I0126 21:07:55.246251 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0126 21:07:55.246859 8 e2e.go:109] Starting e2e run "8b6cb1df-2424-43ca-8fbc-531fea4666d9" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580072873 - Will randomize all specs
Will run 278 of 4814 specs
Jan 26 21:07:55.314: INFO: >>> kubeConfig: /root/.kube/config
Jan 26 21:07:55.319: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 26 21:07:55.374: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 26 21:07:55.409: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 26 21:07:55.409: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 26 21:07:55.409: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 26 21:07:55.419: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 26 21:07:55.419: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 26 21:07:55.419: INFO: e2e test version: v1.17.0
Jan 26 21:07:55.421: INFO: kube-apiserver version: v1.17.0
Jan 26 21:07:55.421: INFO: >>> kubeConfig: /root/.kube/config
Jan 26 21:07:55.427: INFO: Cluster IP family: ipv4
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:07:55.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jan 26 21:07:55.570: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-8ec4213a-c907-4e6f-b279-7260ac98cf98
STEP: Creating a pod to test consume configMaps
Jan 26 21:07:55.591: INFO: Waiting up to 5m0s for pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86" in namespace "configmap-4499" to be "success or failure"
Jan 26 21:07:55.605: INFO: Pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120642ms
Jan 26 21:07:57.617: INFO: Pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025642117s
Jan 26 21:07:59.630: INFO: Pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039005997s
Jan 26 21:08:01.639: INFO: Pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047961451s
Jan 26 21:08:03.648: INFO: Pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056502417s
STEP: Saw pod success
Jan 26 21:08:03.648: INFO: Pod "pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86" satisfied condition "success or failure"
Jan 26 21:08:03.653: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86 container configmap-volume-test:
STEP: delete the pod
Jan 26 21:08:03.739: INFO: Waiting for pod pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86 to disappear
Jan 26 21:08:03.745: INFO: Pod pod-configmaps-0b8f1358-3902-46a1-98c0-7c456259eb86 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:08:03.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4499" for this suite.
• [SLOW TEST:8.373 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
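For reference, the pod this spec creates consumes the ConfigMap through a volume with key-to-path mappings while running as a non-root user. A minimal client-go sketch of that shape (names, image, and uid are illustrative, not the generated values above; assumes a recent client-go where writes take a context):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // the e2e framework generates a fresh namespace instead

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	uid := int64(1000) // non-root, the point of the "as non-root" variant
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					// The "with mappings" part: project key data-1 to a chosen path.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}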
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:08:03.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jan 26 21:08:03.964: INFO: Waiting up to 5m0s for pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562" in namespace "var-expansion-1631" to be "success or failure"
Jan 26 21:08:03.980: INFO: Pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562": Phase="Pending", Reason="", readiness=false. Elapsed: 15.374447ms
Jan 26 21:08:05.989: INFO: Pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024251311s
Jan 26 21:08:07.996: INFO: Pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031913845s
Jan 26 21:08:10.004: INFO: Pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039237627s
Jan 26 21:08:12.011: INFO: Pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046626926s
STEP: Saw pod success
Jan 26 21:08:12.011: INFO: Pod "var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562" satisfied condition "success or failure"
Jan 26 21:08:12.017: INFO: Trying to get logs from node jerma-node pod var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562 container dapi-container:
STEP: delete the pod
Jan 26 21:08:12.104: INFO: Waiting for pod var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562 to disappear
Jan 26 21:08:12.119: INFO: Pod var-expansion-00b36eea-5988-4447-b7a9-e8adb3aef562 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:08:12.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1631" for this suite.
• [SLOW TEST:8.334 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":15,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Lease
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:08:12.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:08:12.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9913" for this suite.
•
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":3,"skipped":27,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
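The coordination.k8s.io/v1 Lease API that this spec probes can be exercised directly with client-go. A minimal sketch, assuming a clientset built elsewhere (names and durations are illustrative):

package main

import (
	"context"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createLease creates a Lease, the object type leader election and node
// heartbeats are built on.
func createLease(client kubernetes.Interface, ns string) error {
	holder := "example-holder"
	duration := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "example-lease"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
		},
	}
	_, err := client.CoordinationV1().Leases(ns).Create(context.TODO(), lease, metav1.CreateOptions{})
	return err
}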
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":3,"skipped":27,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:08:12.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6264, will wait for the garbage collector to delete the pods Jan 26 21:08:22.737: INFO: Deleting Job.batch foo took: 7.969148ms Jan 26 21:08:23.137: INFO: Terminating Job.batch foo pods took: 400.34102ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:08:58.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6264" for this suite. • [SLOW TEST:46.157 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":4,"skipped":70,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:08:58.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:08:58.746: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:09:04.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5875" for this suite. 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:08:58.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:08:58.746: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:09:04.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5875" for this suite.
• [SLOW TEST:5.871 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":5,"skipped":71,"failed":0}
SS
------------------------------
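Listing CRD objects, as this spec does, goes through the apiextensions clientset rather than the core one. A minimal sketch, assuming a recent apiextensions-apiserver client where calls take a context:

package main

import (
	"context"
	"fmt"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists every CustomResourceDefinition in the cluster by name.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crdClient := clientset.NewForConfigOrDie(cfg)
	crds, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}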
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:09:04.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-11bdb32e-8107-4678-93ce-c85a97ca6d45
STEP: Creating a pod to test consume secrets
Jan 26 21:09:04.612: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8" in namespace "projected-7886" to be "success or failure"
Jan 26 21:09:04.633: INFO: Pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.682341ms
Jan 26 21:09:06.639: INFO: Pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026336901s
Jan 26 21:09:08.647: INFO: Pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034594871s
Jan 26 21:09:10.654: INFO: Pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04165541s
Jan 26 21:09:12.666: INFO: Pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053196667s
STEP: Saw pod success
Jan 26 21:09:12.666: INFO: Pod "pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8" satisfied condition "success or failure"
Jan 26 21:09:12.669: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8 container projected-secret-volume-test:
STEP: delete the pod
Jan 26 21:09:12.806: INFO: Waiting for pod pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8 to disappear
Jan 26 21:09:12.872: INFO: Pod pod-projected-secrets-59f146b3-acec-48fa-9ace-086c5226daa8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:09:12.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7886" for this suite.
• [SLOW TEST:8.518 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":73,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
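The volume under test is a projected volume sourcing a secret, with defaultMode controlling file permissions on the projected files. A sketch of that volume definition (the name and 0400 mode are illustrative):

package main

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume builds the kind of volume the spec mounts.
func projectedSecretVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode, // applied to projected files lacking their own mode
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
}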
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:16.823: INFO: stderr: "" Jan 26 21:09:16.823: INFO: stdout: "" Jan 26 21:09:16.823: INFO: update-demo-nautilus-4xbtv is created but not running Jan 26 21:09:21.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:22.133: INFO: stderr: "" Jan 26 21:09:22.133: INFO: stdout: "update-demo-nautilus-4xbtv update-demo-nautilus-zfgwl " Jan 26 21:09:22.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xbtv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:22.623: INFO: stderr: "" Jan 26 21:09:22.623: INFO: stdout: "" Jan 26 21:09:22.623: INFO: update-demo-nautilus-4xbtv is created but not running Jan 26 21:09:27.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:27.784: INFO: stderr: "" Jan 26 21:09:27.784: INFO: stdout: "update-demo-nautilus-4xbtv update-demo-nautilus-zfgwl " Jan 26 21:09:27.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xbtv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:27.911: INFO: stderr: "" Jan 26 21:09:27.911: INFO: stdout: "true" Jan 26 21:09:27.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xbtv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:28.019: INFO: stderr: "" Jan 26 21:09:28.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:09:28.019: INFO: validating pod update-demo-nautilus-4xbtv Jan 26 21:09:28.036: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:09:28.037: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:09:28.037: INFO: update-demo-nautilus-4xbtv is verified up and running Jan 26 21:09:28.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfgwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:28.142: INFO: stderr: "" Jan 26 21:09:28.142: INFO: stdout: "true" Jan 26 21:09:28.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfgwl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:28.240: INFO: stderr: "" Jan 26 21:09:28.240: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:09:28.240: INFO: validating pod update-demo-nautilus-zfgwl Jan 26 21:09:28.250: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:09:28.250: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:09:28.250: INFO: update-demo-nautilus-zfgwl is verified up and running STEP: scaling down the replication controller Jan 26 21:09:28.253: INFO: scanned /root for discovery docs: Jan 26 21:09:28.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6201' Jan 26 21:09:29.480: INFO: stderr: "" Jan 26 21:09:29.480: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 26 21:09:29.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:29.777: INFO: stderr: "" Jan 26 21:09:29.778: INFO: stdout: "update-demo-nautilus-4xbtv update-demo-nautilus-zfgwl " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 26 21:09:34.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:34.942: INFO: stderr: "" Jan 26 21:09:34.942: INFO: stdout: "update-demo-nautilus-4xbtv update-demo-nautilus-zfgwl " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 26 21:09:39.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:40.098: INFO: stderr: "" Jan 26 21:09:40.098: INFO: stdout: "update-demo-nautilus-4xbtv update-demo-nautilus-zfgwl " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 26 21:09:45.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:45.280: INFO: stderr: "" Jan 26 21:09:45.280: INFO: stdout: "update-demo-nautilus-zfgwl " Jan 26 21:09:45.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfgwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:45.370: INFO: stderr: "" Jan 26 21:09:45.370: INFO: stdout: "true" Jan 26 21:09:45.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfgwl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:45.454: INFO: stderr: "" Jan 26 21:09:45.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:09:45.454: INFO: validating pod update-demo-nautilus-zfgwl Jan 26 21:09:45.459: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:09:45.459: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:09:45.459: INFO: update-demo-nautilus-zfgwl is verified up and running STEP: scaling up the replication controller Jan 26 21:09:45.463: INFO: scanned /root for discovery docs: Jan 26 21:09:45.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6201' Jan 26 21:09:46.571: INFO: stderr: "" Jan 26 21:09:46.572: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 26 21:09:46.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:46.679: INFO: stderr: "" Jan 26 21:09:46.679: INFO: stdout: "update-demo-nautilus-82hfq update-demo-nautilus-zfgwl " Jan 26 21:09:46.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82hfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:46.775: INFO: stderr: "" Jan 26 21:09:46.775: INFO: stdout: "" Jan 26 21:09:46.775: INFO: update-demo-nautilus-82hfq is created but not running Jan 26 21:09:51.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:52.124: INFO: stderr: "" Jan 26 21:09:52.124: INFO: stdout: "update-demo-nautilus-82hfq update-demo-nautilus-zfgwl " Jan 26 21:09:52.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82hfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:52.321: INFO: stderr: "" Jan 26 21:09:52.321: INFO: stdout: "" Jan 26 21:09:52.321: INFO: update-demo-nautilus-82hfq is created but not running Jan 26 21:09:57.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6201' Jan 26 21:09:57.537: INFO: stderr: "" Jan 26 21:09:57.537: INFO: stdout: "update-demo-nautilus-82hfq update-demo-nautilus-zfgwl " Jan 26 21:09:57.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82hfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:57.669: INFO: stderr: "" Jan 26 21:09:57.669: INFO: stdout: "true" Jan 26 21:09:57.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82hfq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:57.794: INFO: stderr: "" Jan 26 21:09:57.794: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:09:57.794: INFO: validating pod update-demo-nautilus-82hfq Jan 26 21:09:57.804: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:09:57.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:09:57.804: INFO: update-demo-nautilus-82hfq is verified up and running Jan 26 21:09:57.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfgwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:57.970: INFO: stderr: "" Jan 26 21:09:57.971: INFO: stdout: "true" Jan 26 21:09:57.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zfgwl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6201' Jan 26 21:09:58.086: INFO: stderr: "" Jan 26 21:09:58.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:09:58.086: INFO: validating pod update-demo-nautilus-zfgwl Jan 26 21:09:58.090: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:09:58.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:09:58.090: INFO: update-demo-nautilus-zfgwl is verified up and running STEP: using delete to clean up resources Jan 26 21:09:58.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6201' Jan 26 21:09:58.205: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:09:59.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-b682b9f9-fdc9-4dec-a2c9-f823a4aa12e5
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b682b9f9-fdc9-4dec-a2c9-f823a4aa12e5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:11:27.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5382" for this suite.
• [SLOW TEST:87.384 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":95,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
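The "Updating configmap" step is a plain API update; the kubelet then re-syncs the mounted volume on its own schedule, which is why the spec spends most of its 87s "waiting to observe update in volume". A sketch of the update half (key and value are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMap mutates Data and writes the object back; pods mounting it
// as a volume eventually see the new file contents.
func updateConfigMap(client kubernetes.Interface, ns, name string) error {
	cm, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	_, err = client.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}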
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:11:27.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8648
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 21:11:27.408: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 21:12:03.619: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8648 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 21:12:03.619: INFO: >>> kubeConfig: /root/.kube/config
I0126 21:12:03.683913 8 log.go:172] (0xc000e4b970) (0xc0027f4320) Create stream
I0126 21:12:03.684032 8 log.go:172] (0xc000e4b970) (0xc0027f4320) Stream added, broadcasting: 1
I0126 21:12:03.691234 8 log.go:172] (0xc000e4b970) Reply frame received for 1
I0126 21:12:03.691339 8 log.go:172] (0xc000e4b970) (0xc0024d01e0) Create stream
I0126 21:12:03.691359 8 log.go:172] (0xc000e4b970) (0xc0024d01e0) Stream added, broadcasting: 3
I0126 21:12:03.693358 8 log.go:172] (0xc000e4b970) Reply frame received for 3
I0126 21:12:03.693401 8 log.go:172] (0xc000e4b970) (0xc00290a000) Create stream
I0126 21:12:03.693419 8 log.go:172] (0xc000e4b970) (0xc00290a000) Stream added, broadcasting: 5
I0126 21:12:03.694993 8 log.go:172] (0xc000e4b970) Reply frame received for 5
I0126 21:12:04.779316 8 log.go:172] (0xc000e4b970) Data frame received for 3
I0126 21:12:04.779445 8 log.go:172] (0xc0024d01e0) (3) Data frame handling
I0126 21:12:04.779498 8 log.go:172] (0xc0024d01e0) (3) Data frame sent
I0126 21:12:04.896404 8 log.go:172] (0xc000e4b970) (0xc0024d01e0) Stream removed, broadcasting: 3
I0126 21:12:04.896574 8 log.go:172] (0xc000e4b970) Data frame received for 1
I0126 21:12:04.896621 8 log.go:172] (0xc000e4b970) (0xc00290a000) Stream removed, broadcasting: 5
I0126 21:12:04.896705 8 log.go:172] (0xc0027f4320) (1) Data frame handling
I0126 21:12:04.896743 8 log.go:172] (0xc0027f4320) (1) Data frame sent
I0126 21:12:04.896772 8 log.go:172] (0xc000e4b970) (0xc0027f4320) Stream removed, broadcasting: 1
I0126 21:12:04.896794 8 log.go:172] (0xc000e4b970) Go away received
I0126 21:12:04.897899 8 log.go:172] (0xc000e4b970) (0xc0027f4320) Stream removed, broadcasting: 1
I0126 21:12:04.897917 8 log.go:172] (0xc000e4b970) (0xc0024d01e0) Stream removed, broadcasting: 3
I0126 21:12:04.897932 8 log.go:172] (0xc000e4b970) (0xc00290a000) Stream removed, broadcasting: 5
Jan 26 21:12:04.897: INFO: Found all expected endpoints: [netserver-0]
Jan 26 21:12:04.906: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8648 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 21:12:04.906: INFO: >>> kubeConfig: /root/.kube/config
I0126 21:12:04.964929 8 log.go:172] (0xc00164c2c0) (0xc001fcc5a0) Create stream
I0126 21:12:04.965095 8 log.go:172] (0xc00164c2c0) (0xc001fcc5a0) Stream added, broadcasting: 1
I0126 21:12:04.973111 8 log.go:172] (0xc00164c2c0) Reply frame received for 1
I0126 21:12:04.973159 8 log.go:172] (0xc00164c2c0) (0xc001fcc640) Create stream
I0126 21:12:04.973175 8 log.go:172] (0xc00164c2c0) (0xc001fcc640) Stream added, broadcasting: 3
I0126 21:12:04.974616 8 log.go:172] (0xc00164c2c0) Reply frame received for 3
I0126 21:12:04.974654 8 log.go:172] (0xc00164c2c0) (0xc0027840a0) Create stream
I0126 21:12:04.974666 8 log.go:172] (0xc00164c2c0) (0xc0027840a0) Stream added, broadcasting: 5
I0126 21:12:04.977032 8 log.go:172] (0xc00164c2c0) Reply frame received for 5
I0126 21:12:06.059702 8 log.go:172] (0xc00164c2c0) Data frame received for 3
I0126 21:12:06.059940 8 log.go:172] (0xc001fcc640) (3) Data frame handling
I0126 21:12:06.060843 8 log.go:172] (0xc001fcc640) (3) Data frame sent
I0126 21:12:06.134919 8 log.go:172] (0xc00164c2c0) Data frame received for 1
I0126 21:12:06.134992 8 log.go:172] (0xc001fcc5a0) (1) Data frame handling
I0126 21:12:06.135027 8 log.go:172] (0xc001fcc5a0) (1) Data frame sent
I0126 21:12:06.135071 8 log.go:172] (0xc00164c2c0) (0xc001fcc5a0) Stream removed, broadcasting: 1
I0126 21:12:06.135385 8 log.go:172] (0xc00164c2c0) (0xc001fcc640) Stream removed, broadcasting: 3
I0126 21:12:06.135456 8 log.go:172] (0xc00164c2c0) (0xc0027840a0) Stream removed, broadcasting: 5
I0126 21:12:06.135524 8 log.go:172] (0xc00164c2c0) Go away received
I0126 21:12:06.135560 8 log.go:172] (0xc00164c2c0) (0xc001fcc5a0) Stream removed, broadcasting: 1
I0126 21:12:06.135573 8 log.go:172] (0xc00164c2c0) (0xc001fcc640) Stream removed, broadcasting: 3
I0126 21:12:06.135588 8 log.go:172] (0xc00164c2c0) (0xc0027840a0) Stream removed, broadcasting: 5
Jan 26 21:12:06.135: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:12:06.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8648" for this suite.
• [SLOW TEST:38.822 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
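The probe being exec'd above (`echo hostName | nc -w 1 -u <podIP> 8081`) asks the agnhost netserver on each pod for its hostname over UDP. The same check in plain Go, for reference (addresses are the pod IPs from this run; adjust for your cluster):

package main

import (
	"fmt"
	"net"
	"time"
)

// udpHostnameProbe sends "hostName" to a netserver UDP endpoint and returns
// whatever hostname it replies with.
func udpHostnameProbe(addr string) (string, error) {
	conn, err := net.DialTimeout("udp", addr, time.Second)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if err := conn.SetDeadline(time.Now().Add(time.Second)); err != nil {
		return "", err
	}
	if _, err := conn.Write([]byte("hostName")); err != nil {
		return "", err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	for _, addr := range []string{"10.44.0.2:8081", "10.32.0.4:8081"} {
		name, err := udpHostnameProbe(addr)
		fmt.Println(addr, name, err)
	}
}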
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:12:06.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0126 21:12:17.795208 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 21:12:17.795: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:12:17.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-835" for this suite.
• [SLOW TEST:11.810 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":10,"skipped":195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
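"Not orphaning" means the RC's pods carry ownerReferences pointing at it, so deleting the RC with a cascading propagation policy lets the garbage collector remove them. A sketch of the delete step (namespace/name are illustrative; the test's exact options may differ):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCNotOrphaning deletes the ReplicationController and lets the GC
// collect its pods through their ownerReferences instead of orphaning them.
func deleteRCNotOrphaning(client kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return client.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
}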
[sig-cli] Kubectl client Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:12:17.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 26 21:12:18.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2965'
Jan 26 21:12:19.048: INFO: stderr: ""
Jan 26 21:12:19.048: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 26 21:12:34.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2965 -o json'
Jan 26 21:12:34.256: INFO: stderr: ""
Jan 26 21:12:34.256: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-26T21:12:19Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2965\",\n \"resourceVersion\": \"4533711\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2965/pods/e2e-test-httpd-pod\",\n \"uid\": \"42e34082-ae5e-4a69-a848-ed026bed8fb7\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dgnfc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dgnfc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dgnfc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-26T21:12:19Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-26T21:12:30Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-26T21:12:30Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-26T21:12:19Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://12a1e3ad572902f9d2ba5f2959ee57371a77bb872a0e086a40957d4bf0f28e4f\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-26T21:12:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.1\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-26T21:12:19Z\"\n }\n}\n"
STEP: replace the image in the pod
Jan 26 21:12:34.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2965'
Jan 26 21:12:34.586: INFO: stderr: ""
Jan 26 21:12:34.587: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan 26 21:12:34.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2965'
Jan 26 21:12:41.277: INFO: stderr: ""
Jan 26 21:12:41.277: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:12:41.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2965" for this suite.
• [SLOW TEST:23.349 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":11,"skipped":238,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:12:41.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 21:12:42.330: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 21:12:44.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:12:46.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:12:48.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715669962, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 21:12:51.490: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:12:52.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8143" for this suite.
STEP: Destroying namespace "webhook-8143-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.263 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":12,"skipped":266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:12:52.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 26 21:13:12.761: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 21:13:12.761: INFO: >>> kubeConfig: /root/.kube/config
I0126 21:13:12.827753 8 log.go:172] (0xc000e4bef0) (0xc0027f5d60) Create stream
I0126 21:13:12.827830 8 log.go:172] (0xc000e4bef0) (0xc0027f5d60) Stream added, broadcasting: 1
I0126 21:13:12.831811 8 log.go:172] (0xc000e4bef0) Reply frame received for 1
I0126 21:13:12.831863 8 log.go:172] (0xc000e4bef0) (0xc00223dea0) Create stream
I0126 21:13:12.831883 8 log.go:172] (0xc000e4bef0) (0xc00223dea0) Stream added, broadcasting: 3
I0126 21:13:12.834243 8 log.go:172] (0xc000e4bef0) Reply frame received for 3
I0126 21:13:12.834335 8 log.go:172] (0xc000e4bef0) (0xc002426000) Create stream
I0126 21:13:12.834349 8 log.go:172] (0xc000e4bef0) (0xc002426000) Stream added, broadcasting: 5
I0126 21:13:12.836356 8 log.go:172] (0xc000e4bef0) Reply frame received for 5
I0126 21:13:12.941775 8 log.go:172] (0xc000e4bef0) Data frame received for 3
I0126 21:13:12.941840 8 log.go:172] (0xc00223dea0) (3) Data frame handling
I0126 21:13:12.941875 8 log.go:172] (0xc00223dea0) (3) Data frame sent
I0126 21:13:13.056089 8 log.go:172] (0xc000e4bef0) (0xc00223dea0) Stream removed, broadcasting: 3
I0126 21:13:13.056953 8 log.go:172] (0xc000e4bef0) Data frame received for 1
I0126 21:13:13.057117 8 log.go:172] (0xc0027f5d60) (1) Data frame handling
I0126 21:13:13.057167 8 log.go:172] (0xc0027f5d60) (1) Data frame sent
I0126 21:13:13.057218 8 log.go:172] (0xc000e4bef0) (0xc002426000) Stream removed, broadcasting: 5
I0126 21:13:13.057443 8 log.go:172] (0xc000e4bef0) (0xc0027f5d60) Stream removed, broadcasting: 1
I0126 21:13:13.057543 8 log.go:172] (0xc000e4bef0) Go away received
I0126 21:13:13.057800 8 log.go:172] (0xc000e4bef0) (0xc0027f5d60) Stream removed, broadcasting: 1
I0126 21:13:13.057822 8 log.go:172] (0xc000e4bef0) (0xc00223dea0) Stream removed, broadcasting: 3
I0126 21:13:13.057855 8 log.go:172] (0xc000e4bef0) (0xc002426000) Stream removed, broadcasting: 5
Jan 26 21:13:13.057: INFO: Exec stderr: ""
Jan 26 21:13:13.058: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 21:13:13.058: INFO: >>> kubeConfig: /root/.kube/config
I0126 21:13:13.103761 8 log.go:172] (0xc00164cb00) (0xc001fccf00) Create stream
I0126 21:13:13.103852 8 log.go:172] (0xc00164cb00) (0xc001fccf00) Stream added, broadcasting: 1
I0126 21:13:13.108205 8 log.go:172] (0xc00164cb00) Reply frame received for 1
I0126 21:13:13.108261 8 log.go:172] (0xc00164cb00) (0xc0021325a0) Create stream
I0126 21:13:13.108283 8 log.go:172] (0xc00164cb00) (0xc0021325a0) Stream added, broadcasting: 3
I0126 21:13:13.109501 8 log.go:172] (0xc00164cb00) Reply frame received for 3
I0126 21:13:13.109524 8 log.go:172] (0xc00164cb00) (0xc002132640) Create stream
I0126 21:13:13.109536 8 log.go:172] (0xc00164cb00) (0xc002132640) Stream added, broadcasting: 5
I0126 21:13:13.110699 8 log.go:172] (0xc00164cb00) Reply frame received for 5
I0126 21:13:13.182538 8 log.go:172] (0xc00164cb00) Data frame received for 3
I0126 21:13:13.182663 8 log.go:172] (0xc0021325a0) (3) Data frame handling
I0126 21:13:13.182707 8 log.go:172] (0xc0021325a0) (3) Data frame sent
I0126 21:13:13.248178 8 log.go:172] (0xc00164cb00) Data frame received for 1
I0126 21:13:13.248249 8 log.go:172] (0xc00164cb00) (0xc0021325a0) Stream removed, broadcasting: 3
I0126 21:13:13.248290 8 log.go:172] (0xc001fccf00) (1) Data frame handling
I0126 21:13:13.248314 8 log.go:172] (0xc001fccf00) (1) Data frame sent
I0126 21:13:13.248384 8 log.go:172] (0xc00164cb00) (0xc002132640) Stream removed, broadcasting: 5
I0126 21:13:13.248454 8 log.go:172] (0xc00164cb00) (0xc001fccf00) Stream removed, broadcasting: 1
I0126 21:13:13.248471 8 log.go:172] (0xc00164cb00) Go away received
I0126 21:13:13.248662 8 log.go:172] (0xc00164cb00) (0xc001fccf00) Stream removed, broadcasting: 1
I0126 21:13:13.248677 8 log.go:172] (0xc00164cb00) (0xc0021325a0) Stream removed, broadcasting: 3
I0126 21:13:13.248683 8 log.go:172] (0xc00164cb00) (0xc002132640) Stream removed, broadcasting: 5
Jan 26 21:13:13.248: INFO: Exec stderr: ""
Jan 26 21:13:13.248: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 21:13:13.248: INFO: >>> kubeConfig: /root/.kube/config
I0126 21:13:13.293867 8 log.go:172] (0xc00164d1e0) (0xc001fcd180) Create stream
I0126 21:13:13.293995 8 log.go:172] (0xc00164d1e0) (0xc001fcd180) Stream added, broadcasting: 1
I0126 21:13:13.301919 8 log.go:172] (0xc00164d1e0) Reply frame received for 1
I0126 21:13:13.301976 8 log.go:172] (0xc00164d1e0) (0xc0021fc000) Create stream
I0126 21:13:13.301992 8 log.go:172] (0xc00164d1e0) (0xc0021fc000) Stream added, broadcasting: 3
I0126 21:13:13.303999 8 log.go:172] (0xc00164d1e0) Reply frame received for 3
I0126 21:13:13.304044 8 log.go:172] (0xc00164d1e0) (0xc0027f5ea0) Create stream
I0126 21:13:13.304071 8 log.go:172] (0xc00164d1e0) (0xc0027f5ea0) Stream added, broadcasting: 5
I0126 21:13:13.306690 8 log.go:172] (0xc00164d1e0) Reply frame received for 5
I0126 21:13:13.370764 8 log.go:172] (0xc00164d1e0) Data frame received for 3
I0126 21:13:13.370794 8 log.go:172] (0xc0021fc000) (3) Data frame handling
I0126 21:13:13.370821 8 log.go:172] (0xc0021fc000) (3) Data frame sent
I0126 21:13:13.436894 8 log.go:172] (0xc00164d1e0) Data frame received for 1
I0126 21:13:13.436953 8 log.go:172] (0xc001fcd180) (1) Data frame handling
I0126 21:13:13.437053 8 log.go:172] (0xc001fcd180) (1) Data frame sent
I0126 21:13:13.437085 8 log.go:172] (0xc00164d1e0) (0xc001fcd180) Stream removed, broadcasting: 1
I0126 21:13:13.437627 8 log.go:172] (0xc00164d1e0) (0xc0027f5ea0) Stream removed, broadcasting: 5
I0126 21:13:13.437681 8 log.go:172] (0xc00164d1e0) (0xc0021fc000) Stream removed, broadcasting: 3
I0126 21:13:13.437719 8 log.go:172] (0xc00164d1e0) (0xc001fcd180) Stream removed, broadcasting: 1
I0126 21:13:13.437728 8 log.go:172] (0xc00164d1e0) (0xc0021fc000) Stream removed, broadcasting: 3
I0126 21:13:13.437738 8 log.go:172] (0xc00164d1e0) (0xc0027f5ea0) Stream removed, broadcasting: 5
Jan 26 21:13:13.437: INFO: Exec stderr: ""
Jan 26 21:13:13.438: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 21:13:13.438: INFO: >>> kubeConfig: /root/.kube/config
I0126 21:13:13.487209 8 log.go:172] (0xc00164d760) (0xc001fcd360) Create stream
I0126 21:13:13.487289 8 log.go:172] (0xc00164d760) (0xc001fcd360) Stream added, broadcasting: 1
I0126 21:13:13.492544 8 log.go:172] (0xc00164d760) Reply frame received for 1
I0126 21:13:13.492589 8 log.go:172] (0xc00164d760) (0xc0027f5f40) Create stream
I0126 21:13:13.492603 8 log.go:172] (0xc00164d760) (0xc0027f5f40) Stream added, broadcasting: 3
I0126 21:13:13.494154 8 log.go:172] (0xc00164d760) Reply frame received for 3
I0126 21:13:13.494199 8 log.go:172] (0xc00164d760) (0xc002132780) Create stream
I0126 21:13:13.494209 8 log.go:172] (0xc00164d760) (0xc002132780) Stream added, broadcasting: 5
I0126 21:13:13.495339 8 log.go:172] (0xc00164d760) Reply frame received for 5
I0126 21:13:13.551935 8 log.go:172] (0xc00164d760) Data frame received for 3
I0126 21:13:13.551985 8 log.go:172] (0xc0027f5f40) (3) Data frame handling
I0126 21:13:13.552022 8 log.go:172] (0xc0027f5f40) (3) Data frame sent
I0126 21:13:13.632455 8 log.go:172] (0xc00164d760) (0xc0027f5f40) Stream removed, broadcasting: 3
I0126 21:13:13.632889 8 log.go:172] (0xc00164d760) Data frame received for 1
I0126 21:13:13.632911 8 log.go:172] (0xc001fcd360) (1) Data frame handling
I0126 21:13:13.632956 8 log.go:172] (0xc001fcd360) (1) Data frame sent
I0126 21:13:13.632981 8 log.go:172] (0xc00164d760) (0xc001fcd360) Stream removed, broadcasting: 1
I0126 21:13:13.633727 8 log.go:172] (0xc00164d760) (0xc002132780) Stream removed, broadcasting: 5
I0126 21:13:13.633921 8 log.go:172] (0xc00164d760) Go away received
I0126 21:13:13.634199 8 log.go:172] (0xc00164d760) (0xc001fcd360) Stream removed, broadcasting: 1
I0126 21:13:13.634278 8 log.go:172] (0xc00164d760) (0xc0027f5f40) Stream removed, broadcasting: 3
I0126 21:13:13.634315 8 log.go:172] (0xc00164d760) (0xc002132780) Stream removed, broadcasting: 5
Jan 26 21:13:13.634: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 26 21:13:13.634: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:13:13.634: INFO: >>> kubeConfig: /root/.kube/config I0126 21:13:13.681834 8 log.go:172] (0xc00164dce0) (0xc001fcd4a0) Create stream I0126 21:13:13.682107 8 log.go:172] (0xc00164dce0) (0xc001fcd4a0) Stream added, broadcasting: 1 I0126 21:13:13.686503 8 log.go:172] (0xc00164dce0) Reply frame received for 1 I0126 21:13:13.686617 8 log.go:172] (0xc00164dce0) (0xc0024260a0) Create stream I0126 21:13:13.686634 8 log.go:172] (0xc00164dce0) (0xc0024260a0) Stream added, broadcasting: 3 I0126 21:13:13.687698 8 log.go:172] (0xc00164dce0) Reply frame received for 3 I0126 21:13:13.687724 8 log.go:172] (0xc00164dce0) (0xc0021b00a0) Create stream I0126 21:13:13.687740 8 log.go:172] (0xc00164dce0) (0xc0021b00a0) Stream added, broadcasting: 5 I0126 21:13:13.688774 8 log.go:172] (0xc00164dce0) Reply frame received for 5 I0126 21:13:13.755981 8 log.go:172] (0xc00164dce0) Data frame received for 3 I0126 21:13:13.756256 8 log.go:172] (0xc0024260a0) (3) Data frame handling I0126 21:13:13.756302 8 log.go:172] (0xc0024260a0) (3) Data frame sent I0126 21:13:13.887865 8 log.go:172] (0xc00164dce0) Data frame received for 1 I0126 21:13:13.888278 8 log.go:172] (0xc00164dce0) (0xc0024260a0) Stream removed, broadcasting: 3 I0126 21:13:13.888421 8 log.go:172] (0xc001fcd4a0) (1) Data frame handling I0126 21:13:13.888477 8 log.go:172] (0xc001fcd4a0) (1) Data frame sent I0126 21:13:13.888614 8 log.go:172] (0xc00164dce0) (0xc001fcd4a0) Stream removed, broadcasting: 1 I0126 21:13:13.889051 8 log.go:172] (0xc00164dce0) (0xc0021b00a0) Stream removed, broadcasting: 5 I0126 21:13:13.889425 8 log.go:172] (0xc00164dce0) Go away received I0126 21:13:13.889495 8 log.go:172] (0xc00164dce0) (0xc001fcd4a0) Stream removed, broadcasting: 1 I0126 21:13:13.889544 8 log.go:172] (0xc00164dce0) (0xc0024260a0) Stream removed, broadcasting: 3 I0126 21:13:13.889567 8 log.go:172] (0xc00164dce0) (0xc0021b00a0) Stream removed, broadcasting: 5 Jan 26 21:13:13.889: INFO: Exec stderr: "" Jan 26 21:13:13.889: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:13:13.889: INFO: >>> kubeConfig: /root/.kube/config I0126 21:13:13.961330 8 log.go:172] (0xc00351a370) (0xc001fcd680) Create stream I0126 21:13:13.961501 8 log.go:172] (0xc00351a370) (0xc001fcd680) Stream added, broadcasting: 1 I0126 21:13:13.974843 8 log.go:172] (0xc00351a370) Reply frame received for 1 I0126 21:13:13.975034 8 log.go:172] (0xc00351a370) (0xc0021b0140) Create stream I0126 21:13:13.975063 8 log.go:172] (0xc00351a370) (0xc0021b0140) Stream added, broadcasting: 3 I0126 21:13:13.980721 8 log.go:172] (0xc00351a370) Reply frame received for 3 I0126 21:13:13.980928 8 log.go:172] (0xc00351a370) (0xc002426140) Create stream I0126 21:13:13.980956 8 log.go:172] (0xc00351a370) (0xc002426140) Stream added, broadcasting: 5 I0126 21:13:13.985752 8 log.go:172] (0xc00351a370) Reply frame received for 5 I0126 21:13:14.115152 8 log.go:172] (0xc00351a370) Data frame received for 3 I0126 21:13:14.115222 8 log.go:172] (0xc0021b0140) (3) Data frame handling I0126 21:13:14.115259 8 log.go:172] (0xc0021b0140) (3) Data frame sent I0126 21:13:14.188246 8 log.go:172] (0xc00351a370) Data frame received for 1 I0126 21:13:14.188323 8 log.go:172] (0xc00351a370) (0xc0021b0140) Stream 
removed, broadcasting: 3 I0126 21:13:14.188372 8 log.go:172] (0xc001fcd680) (1) Data frame handling I0126 21:13:14.188410 8 log.go:172] (0xc001fcd680) (1) Data frame sent I0126 21:13:14.188451 8 log.go:172] (0xc00351a370) (0xc002426140) Stream removed, broadcasting: 5 I0126 21:13:14.188493 8 log.go:172] (0xc00351a370) (0xc001fcd680) Stream removed, broadcasting: 1 I0126 21:13:14.188531 8 log.go:172] (0xc00351a370) Go away received I0126 21:13:14.188720 8 log.go:172] (0xc00351a370) (0xc001fcd680) Stream removed, broadcasting: 1 I0126 21:13:14.188732 8 log.go:172] (0xc00351a370) (0xc0021b0140) Stream removed, broadcasting: 3 I0126 21:13:14.188742 8 log.go:172] (0xc00351a370) (0xc002426140) Stream removed, broadcasting: 5 Jan 26 21:13:14.188: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 26 21:13:14.188: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:13:14.189: INFO: >>> kubeConfig: /root/.kube/config I0126 21:13:14.224183 8 log.go:172] (0xc000472a50) (0xc002426460) Create stream I0126 21:13:14.224230 8 log.go:172] (0xc000472a50) (0xc002426460) Stream added, broadcasting: 1 I0126 21:13:14.227256 8 log.go:172] (0xc000472a50) Reply frame received for 1 I0126 21:13:14.227290 8 log.go:172] (0xc000472a50) (0xc002132820) Create stream I0126 21:13:14.227314 8 log.go:172] (0xc000472a50) (0xc002132820) Stream added, broadcasting: 3 I0126 21:13:14.228288 8 log.go:172] (0xc000472a50) Reply frame received for 3 I0126 21:13:14.228316 8 log.go:172] (0xc000472a50) (0xc001fcd720) Create stream I0126 21:13:14.228330 8 log.go:172] (0xc000472a50) (0xc001fcd720) Stream added, broadcasting: 5 I0126 21:13:14.229192 8 log.go:172] (0xc000472a50) Reply frame received for 5 I0126 21:13:14.276811 8 log.go:172] (0xc000472a50) Data frame received for 3 I0126 21:13:14.276853 8 log.go:172] (0xc002132820) (3) Data frame handling I0126 21:13:14.276893 8 log.go:172] (0xc002132820) (3) Data frame sent I0126 21:13:14.343484 8 log.go:172] (0xc000472a50) Data frame received for 1 I0126 21:13:14.343597 8 log.go:172] (0xc002426460) (1) Data frame handling I0126 21:13:14.343628 8 log.go:172] (0xc002426460) (1) Data frame sent I0126 21:13:14.343658 8 log.go:172] (0xc000472a50) (0xc002426460) Stream removed, broadcasting: 1 I0126 21:13:14.344875 8 log.go:172] (0xc000472a50) (0xc002132820) Stream removed, broadcasting: 3 I0126 21:13:14.345033 8 log.go:172] (0xc000472a50) (0xc001fcd720) Stream removed, broadcasting: 5 I0126 21:13:14.345078 8 log.go:172] (0xc000472a50) Go away received I0126 21:13:14.345129 8 log.go:172] (0xc000472a50) (0xc002426460) Stream removed, broadcasting: 1 I0126 21:13:14.345144 8 log.go:172] (0xc000472a50) (0xc002132820) Stream removed, broadcasting: 3 I0126 21:13:14.345159 8 log.go:172] (0xc000472a50) (0xc001fcd720) Stream removed, broadcasting: 5 Jan 26 21:13:14.345: INFO: Exec stderr: "" Jan 26 21:13:14.345: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:13:14.345: INFO: >>> kubeConfig: /root/.kube/config I0126 21:13:14.381890 8 log.go:172] (0xc00309c580) (0xc0021b0320) Create stream I0126 21:13:14.381954 8 log.go:172] (0xc00309c580) (0xc0021b0320) Stream added, broadcasting: 1 
I0126 21:13:14.402600 8 log.go:172] (0xc00309c580) Reply frame received for 1 I0126 21:13:14.402860 8 log.go:172] (0xc00309c580) (0xc0020c00a0) Create stream I0126 21:13:14.402933 8 log.go:172] (0xc00309c580) (0xc0020c00a0) Stream added, broadcasting: 3 I0126 21:13:14.410338 8 log.go:172] (0xc00309c580) Reply frame received for 3 I0126 21:13:14.410369 8 log.go:172] (0xc00309c580) (0xc0020c0140) Create stream I0126 21:13:14.410378 8 log.go:172] (0xc00309c580) (0xc0020c0140) Stream added, broadcasting: 5 I0126 21:13:14.412025 8 log.go:172] (0xc00309c580) Reply frame received for 5 I0126 21:13:14.484360 8 log.go:172] (0xc00309c580) Data frame received for 3 I0126 21:13:14.484452 8 log.go:172] (0xc0020c00a0) (3) Data frame handling I0126 21:13:14.484515 8 log.go:172] (0xc0020c00a0) (3) Data frame sent I0126 21:13:14.577441 8 log.go:172] (0xc00309c580) Data frame received for 1 I0126 21:13:14.577558 8 log.go:172] (0xc00309c580) (0xc0020c0140) Stream removed, broadcasting: 5 I0126 21:13:14.577613 8 log.go:172] (0xc0021b0320) (1) Data frame handling I0126 21:13:14.577627 8 log.go:172] (0xc0021b0320) (1) Data frame sent I0126 21:13:14.577659 8 log.go:172] (0xc00309c580) (0xc0020c00a0) Stream removed, broadcasting: 3 I0126 21:13:14.577686 8 log.go:172] (0xc00309c580) (0xc0021b0320) Stream removed, broadcasting: 1 I0126 21:13:14.577695 8 log.go:172] (0xc00309c580) Go away received I0126 21:13:14.577914 8 log.go:172] (0xc00309c580) (0xc0021b0320) Stream removed, broadcasting: 1 I0126 21:13:14.577932 8 log.go:172] (0xc00309c580) (0xc0020c00a0) Stream removed, broadcasting: 3 I0126 21:13:14.577951 8 log.go:172] (0xc00309c580) (0xc0020c0140) Stream removed, broadcasting: 5 Jan 26 21:13:14.577: INFO: Exec stderr: "" Jan 26 21:13:14.578: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:13:14.578: INFO: >>> kubeConfig: /root/.kube/config I0126 21:13:14.614578 8 log.go:172] (0xc000e4bb80) (0xc00290a140) Create stream I0126 21:13:14.614652 8 log.go:172] (0xc000e4bb80) (0xc00290a140) Stream added, broadcasting: 1 I0126 21:13:14.621530 8 log.go:172] (0xc000e4bb80) Reply frame received for 1 I0126 21:13:14.621582 8 log.go:172] (0xc000e4bb80) (0xc0020c01e0) Create stream I0126 21:13:14.621595 8 log.go:172] (0xc000e4bb80) (0xc0020c01e0) Stream added, broadcasting: 3 I0126 21:13:14.622972 8 log.go:172] (0xc000e4bb80) Reply frame received for 3 I0126 21:13:14.623000 8 log.go:172] (0xc000e4bb80) (0xc0027f4000) Create stream I0126 21:13:14.623023 8 log.go:172] (0xc000e4bb80) (0xc0027f4000) Stream added, broadcasting: 5 I0126 21:13:14.624486 8 log.go:172] (0xc000e4bb80) Reply frame received for 5 I0126 21:13:14.676518 8 log.go:172] (0xc000e4bb80) Data frame received for 3 I0126 21:13:14.676556 8 log.go:172] (0xc0020c01e0) (3) Data frame handling I0126 21:13:14.676579 8 log.go:172] (0xc0020c01e0) (3) Data frame sent I0126 21:13:14.737460 8 log.go:172] (0xc000e4bb80) Data frame received for 1 I0126 21:13:14.737527 8 log.go:172] (0xc000e4bb80) (0xc0027f4000) Stream removed, broadcasting: 5 I0126 21:13:14.737570 8 log.go:172] (0xc00290a140) (1) Data frame handling I0126 21:13:14.737588 8 log.go:172] (0xc00290a140) (1) Data frame sent I0126 21:13:14.737598 8 log.go:172] (0xc000e4bb80) (0xc0020c01e0) Stream removed, broadcasting: 3 I0126 21:13:14.737662 8 log.go:172] (0xc000e4bb80) (0xc00290a140) Stream removed, broadcasting: 1 I0126 21:13:14.737681 8 
log.go:172] (0xc000e4bb80) Go away received I0126 21:13:14.737790 8 log.go:172] (0xc000e4bb80) (0xc00290a140) Stream removed, broadcasting: 1 I0126 21:13:14.737800 8 log.go:172] (0xc000e4bb80) (0xc0020c01e0) Stream removed, broadcasting: 3 I0126 21:13:14.737814 8 log.go:172] (0xc000e4bb80) (0xc0027f4000) Stream removed, broadcasting: 5 Jan 26 21:13:14.737: INFO: Exec stderr: "" Jan 26 21:13:14.737: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2156 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:13:14.737: INFO: >>> kubeConfig: /root/.kube/config I0126 21:13:14.775406 8 log.go:172] (0xc00164c160) (0xc0027f4320) Create stream I0126 21:13:14.775482 8 log.go:172] (0xc00164c160) (0xc0027f4320) Stream added, broadcasting: 1 I0126 21:13:14.778120 8 log.go:172] (0xc00164c160) Reply frame received for 1 I0126 21:13:14.778152 8 log.go:172] (0xc00164c160) (0xc002784000) Create stream I0126 21:13:14.778162 8 log.go:172] (0xc00164c160) (0xc002784000) Stream added, broadcasting: 3 I0126 21:13:14.779028 8 log.go:172] (0xc00164c160) Reply frame received for 3 I0126 21:13:14.779050 8 log.go:172] (0xc00164c160) (0xc00290a1e0) Create stream I0126 21:13:14.779062 8 log.go:172] (0xc00164c160) (0xc00290a1e0) Stream added, broadcasting: 5 I0126 21:13:14.781152 8 log.go:172] (0xc00164c160) Reply frame received for 5 I0126 21:13:14.843230 8 log.go:172] (0xc00164c160) Data frame received for 3 I0126 21:13:14.843280 8 log.go:172] (0xc002784000) (3) Data frame handling I0126 21:13:14.843324 8 log.go:172] (0xc002784000) (3) Data frame sent I0126 21:13:14.932621 8 log.go:172] (0xc00164c160) Data frame received for 1 I0126 21:13:14.932714 8 log.go:172] (0xc00164c160) (0xc002784000) Stream removed, broadcasting: 3 I0126 21:13:14.932781 8 log.go:172] (0xc0027f4320) (1) Data frame handling I0126 21:13:14.932810 8 log.go:172] (0xc00164c160) (0xc00290a1e0) Stream removed, broadcasting: 5 I0126 21:13:14.932857 8 log.go:172] (0xc0027f4320) (1) Data frame sent I0126 21:13:14.932874 8 log.go:172] (0xc00164c160) (0xc0027f4320) Stream removed, broadcasting: 1 I0126 21:13:14.932911 8 log.go:172] (0xc00164c160) Go away received I0126 21:13:14.933336 8 log.go:172] (0xc00164c160) (0xc0027f4320) Stream removed, broadcasting: 1 I0126 21:13:14.933364 8 log.go:172] (0xc00164c160) (0xc002784000) Stream removed, broadcasting: 3 I0126 21:13:14.933377 8 log.go:172] (0xc00164c160) (0xc00290a1e0) Stream removed, broadcasting: 5 Jan 26 21:13:14.933: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:13:14.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2156" for this suite. 
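Note: the long runs of "Create stream / Stream added, broadcasting: 1/3/5" lines above are the SPDY plumbing behind each ExecWithOptions call. With stdin unset, every `cat /etc/hosts` opens three channels through the API server's exec subresource, which lines up with an error channel plus stdout and stderr; the "Stream removed" lines are their teardown. A sketch of one such exec with plain client-go, assuming a recent release (StreamWithContext was added later; older client-go exposes only Stream without a context); namespace, pod and container names are the test's:

    package example

    import (
        "bytes"
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        restclient "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // catEtcHosts runs `cat /etc/hosts` inside a pod container via the API
    // server's exec subresource, roughly what the framework's
    // ExecWithOptions calls above do.
    func catEtcHosts(ctx context.Context, c kubernetes.Interface, cfg *restclient.Config) (string, error) {
        req := c.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("e2e-kubelet-etc-hosts-2156").
            Name("test-pod").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "busybox-1",
                Command:   []string{"cat", "/etc/hosts"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            return "", err
        }
        var stdout, stderr bytes.Buffer
        if err := exec.StreamWithContext(ctx, remotecommand.StreamOptions{
            Stdout: &stdout,
            Stderr: &stderr,
        }); err != nil {
            return "", err
        }
        return stdout.String(), nil
    }

The test compares these outputs to verify that the kubelet injects its managed /etc/hosts for hostNetwork=false pods, but leaves the file alone when the container mounts its own /etc/hosts or the pod runs with hostNetwork=true.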
• [SLOW TEST:22.388 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":304,"failed":0}
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:13:14.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-92869ee1-ddc4-46a2-988f-3dfffe6d0b0d
STEP: Creating a pod to test consume configMaps
Jan 26 21:13:15.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f" in namespace "configmap-987" to be "success or failure"
Jan 26 21:13:15.163: INFO: Pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.863582ms
Jan 26 21:13:17.172: INFO: Pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019611254s
Jan 26 21:13:19.290: INFO: Pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137677056s
Jan 26 21:13:21.307: INFO: Pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155044433s
Jan 26 21:13:23.314: INFO: Pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162016943s
STEP: Saw pod success
Jan 26 21:13:23.314: INFO: Pod "pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f" satisfied condition "success or failure"
Jan 26 21:13:23.319: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f container configmap-volume-test:
STEP: delete the pod
Jan 26 21:13:23.386: INFO: Waiting for pod pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f to disappear
Jan 26 21:13:23.392: INFO: Pod pod-configmaps-cfbbf772-c71a-473b-ba7e-c9a69857d01f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:13:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-987" for this suite.
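Note: the ConfigMap volume test above follows the pattern visible in its STEP lines: create a ConfigMap, mount it into a short-lived pod, wait for the pod to reach Succeeded, then read the container's log to check the projected content. A sketch of the pod shape, assuming illustrative names, key ("data-1"), and image rather than the test's exact spec:

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // configMapVolumePod builds a pod that mounts the named ConfigMap as a
    // volume and prints one projected key, then exits, so the caller can
    // wait for Succeeded and inspect the log.
    func configMapVolumePod(cmName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // test pods run to completion
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
    }

Each key of the ConfigMap appears as a file under the mount path, which is why the "success or failure" check can be driven entirely by the container's exit status and log output.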
• [SLOW TEST:8.470 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":304,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:13:23.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 21:13:23.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886" in namespace "downward-api-8367" to be "success or failure"
Jan 26 21:13:23.683: INFO: Pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886": Phase="Pending", Reason="", readiness=false. Elapsed: 38.023498ms
Jan 26 21:13:26.190: INFO: Pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544190687s
Jan 26 21:13:28.452: INFO: Pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886": Phase="Pending", Reason="", readiness=false. Elapsed: 4.806955032s
Jan 26 21:13:30.468: INFO: Pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886": Phase="Pending", Reason="", readiness=false. Elapsed: 6.822726254s
Jan 26 21:13:32.476: INFO: Pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.830526699s
STEP: Saw pod success
Jan 26 21:13:32.476: INFO: Pod "downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886" satisfied condition "success or failure"
Jan 26 21:13:32.481: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886 container client-container:
STEP: delete the pod
Jan 26 21:13:32.555: INFO: Waiting for pod downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886 to disappear
Jan 26 21:13:32.567: INFO: Pod downwardapi-volume-edd360ff-e514-4229-a24d-0f8ff2669886 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:13:32.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8367" for this suite.
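Note: the downward API test above uses the same create-wait-check flow, but the mounted volume is projected from the pod's own metadata rather than from a separate API object. A sketch of the volume that exposes only the pod name, with illustrative volume and file names; a container mounting it at /etc/podinfo would read its own name from /etc/podinfo/podname:

    package example

    import corev1 "k8s.io/api/core/v1"

    // downwardAPIPodnameVolume builds a volume that projects the pod's
    // metadata.name into a single file named "podname".
    func downwardAPIPodnameVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "podname",
                        FieldRef: &corev1.ObjectFieldSelector{
                            APIVersion: "v1",
                            FieldPath:  "metadata.name",
                        },
                    }},
                },
            },
        }
    }

Because the field comes from the pod's own object, no extra RBAC or API round-trip is needed inside the container; the kubelet writes the file when it sets up the volume.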
• [SLOW TEST:9.150 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":322,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:13:32.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:13:32.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9714" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":16,"skipped":323,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:13:32.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:13:34.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3626" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":17,"skipped":330,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:13:34.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-ccw8 STEP: Creating a pod to test atomic-volume-subpath Jan 26 21:13:34.284: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ccw8" in namespace "subpath-2945" to be "success or failure" Jan 26 21:13:34.288: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010321ms Jan 26 21:13:36.403: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118768053s Jan 26 21:13:39.161: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.877161514s Jan 26 21:13:41.246: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.962690924s Jan 26 21:13:43.253: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 8.969062567s Jan 26 21:13:45.260: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 10.976580966s Jan 26 21:13:47.266: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 12.982495805s Jan 26 21:13:49.272: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 14.988206644s Jan 26 21:13:51.278: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 16.994611396s Jan 26 21:13:53.285: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 19.001202688s Jan 26 21:13:55.292: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 21.007811418s Jan 26 21:13:57.299: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. 
Elapsed: 23.015566153s Jan 26 21:13:59.308: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 25.024152629s Jan 26 21:14:01.365: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Running", Reason="", readiness=true. Elapsed: 27.081401211s Jan 26 21:14:03.381: INFO: Pod "pod-subpath-test-configmap-ccw8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.097277819s STEP: Saw pod success Jan 26 21:14:03.381: INFO: Pod "pod-subpath-test-configmap-ccw8" satisfied condition "success or failure" Jan 26 21:14:03.386: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-subpath-test-configmap-ccw8 container test-container-subpath-configmap-ccw8: STEP: delete the pod Jan 26 21:14:04.097: INFO: Waiting for pod pod-subpath-test-configmap-ccw8 to disappear Jan 26 21:14:04.114: INFO: Pod pod-subpath-test-configmap-ccw8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-ccw8 Jan 26 21:14:04.114: INFO: Deleting pod "pod-subpath-test-configmap-ccw8" in namespace "subpath-2945" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:14:04.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2945" for this suite. • [SLOW TEST:30.267 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":18,"skipped":336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:14:04.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 26 21:14:15.194: INFO: Successfully updated pod "adopt-release-s4g2t" STEP: Checking that the Job readopts the Pod Jan 26 21:14:15.194: INFO: Waiting up to 15m0s for pod "adopt-release-s4g2t" in namespace "job-4107" to be "adopted" Jan 26 21:14:15.244: INFO: Pod "adopt-release-s4g2t": Phase="Running", Reason="", readiness=true. Elapsed: 50.478562ms Jan 26 21:14:17.339: INFO: Pod "adopt-release-s4g2t": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.145454686s Jan 26 21:14:17.339: INFO: Pod "adopt-release-s4g2t" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 26 21:14:17.863: INFO: Successfully updated pod "adopt-release-s4g2t" STEP: Checking that the Job releases the Pod Jan 26 21:14:17.864: INFO: Waiting up to 15m0s for pod "adopt-release-s4g2t" in namespace "job-4107" to be "released" Jan 26 21:14:17.996: INFO: Pod "adopt-release-s4g2t": Phase="Running", Reason="", readiness=true. Elapsed: 132.330955ms Jan 26 21:14:17.996: INFO: Pod "adopt-release-s4g2t" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:14:17.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4107" for this suite. • [SLOW TEST:13.550 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":19,"skipped":363,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:14:18.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-97fa6cac-4697-4a8a-871a-852834339ff1 STEP: Creating a pod to test consume secrets Jan 26 21:14:18.147: INFO: Waiting up to 5m0s for pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6" in namespace "secrets-4280" to be "success or failure" Jan 26 21:14:18.171: INFO: Pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 23.731831ms Jan 26 21:14:20.177: INFO: Pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03040673s Jan 26 21:14:22.181: INFO: Pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034547409s Jan 26 21:14:24.189: INFO: Pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042388878s Jan 26 21:14:26.195: INFO: Pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.047872876s STEP: Saw pod success Jan 26 21:14:26.195: INFO: Pod "pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6" satisfied condition "success or failure" Jan 26 21:14:26.198: INFO: Trying to get logs from node jerma-node pod pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6 container secret-volume-test: STEP: delete the pod Jan 26 21:14:26.251: INFO: Waiting for pod pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6 to disappear Jan 26 21:14:26.261: INFO: Pod pod-secrets-08e1a4e2-438c-446e-bcae-94d8d11ae7d6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:14:26.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4280" for this suite. • [SLOW TEST:8.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:14:26.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:14:26.421: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5567 I0126 21:14:26.447931 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5567, replica count: 1 I0126 21:14:27.499187 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:14:28.499888 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:14:29.500294 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:14:30.500854 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:14:31.501286 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:14:32.501809 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:14:33.502297 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0126 21:14:34.503667 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 26 21:14:34.628: INFO: Created: latency-svc-kprhd Jan 26 21:14:35.184: INFO: Got endpoints: latency-svc-kprhd [579.597706ms] Jan 26 21:14:35.653: INFO: Created: latency-svc-vkbn5 Jan 26 21:14:35.662: INFO: Got endpoints: latency-svc-vkbn5 [477.988407ms] Jan 26 21:14:35.687: INFO: Created: latency-svc-72rzv Jan 26 21:14:35.702: INFO: Got endpoints: latency-svc-72rzv [517.819779ms] Jan 26 21:14:35.727: INFO: Created: latency-svc-qt5f2 Jan 26 21:14:35.771: INFO: Got endpoints: latency-svc-qt5f2 [584.88832ms] Jan 26 21:14:35.796: INFO: Created: latency-svc-6ftdf Jan 26 21:14:35.817: INFO: Got endpoints: latency-svc-6ftdf [631.183102ms] Jan 26 21:14:35.843: INFO: Created: latency-svc-78z7s Jan 26 21:14:35.855: INFO: Got endpoints: latency-svc-78z7s [668.625433ms] Jan 26 21:14:35.945: INFO: Created: latency-svc-g5br9 Jan 26 21:14:35.952: INFO: Got endpoints: latency-svc-g5br9 [765.923639ms] Jan 26 21:14:36.011: INFO: Created: latency-svc-nv98l Jan 26 21:14:36.017: INFO: Got endpoints: latency-svc-nv98l [830.447671ms] Jan 26 21:14:36.035: INFO: Created: latency-svc-zk56l Jan 26 21:14:36.080: INFO: Got endpoints: latency-svc-zk56l [892.747104ms] Jan 26 21:14:36.093: INFO: Created: latency-svc-4fvwr Jan 26 21:14:36.102: INFO: Got endpoints: latency-svc-4fvwr [917.202969ms] Jan 26 21:14:36.148: INFO: Created: latency-svc-wr7b7 Jan 26 21:14:36.149: INFO: Got endpoints: latency-svc-wr7b7 [961.815807ms] Jan 26 21:14:36.173: INFO: Created: latency-svc-c4gwx Jan 26 21:14:36.258: INFO: Got endpoints: latency-svc-c4gwx [1.070941421s] Jan 26 21:14:36.259: INFO: Created: latency-svc-7cjlc Jan 26 21:14:36.282: INFO: Got endpoints: latency-svc-7cjlc [1.09537767s] Jan 26 21:14:36.284: INFO: Created: latency-svc-fzt6m Jan 26 21:14:36.291: INFO: Got endpoints: latency-svc-fzt6m [1.103609558s] Jan 26 21:14:36.400: INFO: Created: latency-svc-z5r24 Jan 26 21:14:36.407: INFO: Got endpoints: latency-svc-z5r24 [1.220101254s] Jan 26 21:14:36.424: INFO: Created: latency-svc-k6vtp Jan 26 21:14:36.448: INFO: Got endpoints: latency-svc-k6vtp [1.260312626s] Jan 26 21:14:36.449: INFO: Created: latency-svc-xrjj2 Jan 26 21:14:36.470: INFO: Got endpoints: latency-svc-xrjj2 [808.081879ms] Jan 26 21:14:36.487: INFO: Created: latency-svc-2x9xk Jan 26 21:14:36.570: INFO: Got endpoints: latency-svc-2x9xk [867.601888ms] Jan 26 21:14:36.575: INFO: Created: latency-svc-jldx8 Jan 26 21:14:36.596: INFO: Got endpoints: latency-svc-jldx8 [824.730363ms] Jan 26 21:14:36.602: INFO: Created: latency-svc-wjbdn Jan 26 21:14:36.617: INFO: Got endpoints: latency-svc-wjbdn [799.126758ms] Jan 26 21:14:36.631: INFO: Created: latency-svc-7s97s Jan 26 21:14:36.644: INFO: Got endpoints: latency-svc-7s97s [788.36423ms] Jan 26 21:14:36.760: INFO: Created: latency-svc-bpnct Jan 26 21:14:36.774: INFO: Got endpoints: latency-svc-bpnct [820.582517ms] Jan 26 21:14:36.777: INFO: Created: latency-svc-kdk84 Jan 26 21:14:36.779: INFO: Got endpoints: latency-svc-kdk84 [762.174487ms] Jan 26 21:14:36.810: INFO: Created: latency-svc-q82xv Jan 26 21:14:36.838: INFO: Got endpoints: latency-svc-q82xv [758.463262ms] Jan 26 21:14:36.927: INFO: Created: latency-svc-wlmks Jan 26 21:14:36.947: INFO: Got endpoints: latency-svc-wlmks [844.703388ms] Jan 26 21:14:36.950: INFO: Created: latency-svc-lt2nh Jan 26 21:14:36.973: INFO: Got endpoints: latency-svc-lt2nh 
[823.745064ms] Jan 26 21:14:36.976: INFO: Created: latency-svc-d9xrm Jan 26 21:14:36.976: INFO: Got endpoints: latency-svc-d9xrm [717.791346ms] Jan 26 21:14:37.017: INFO: Created: latency-svc-wc7f5 Jan 26 21:14:37.019: INFO: Got endpoints: latency-svc-wc7f5 [736.69404ms] Jan 26 21:14:37.177: INFO: Created: latency-svc-rnl78 Jan 26 21:14:37.186: INFO: Got endpoints: latency-svc-rnl78 [895.850317ms] Jan 26 21:14:37.202: INFO: Created: latency-svc-645x5 Jan 26 21:14:37.208: INFO: Got endpoints: latency-svc-645x5 [800.772506ms] Jan 26 21:14:37.228: INFO: Created: latency-svc-b55j7 Jan 26 21:14:37.244: INFO: Got endpoints: latency-svc-b55j7 [796.106852ms] Jan 26 21:14:37.265: INFO: Created: latency-svc-c72fw Jan 26 21:14:37.327: INFO: Got endpoints: latency-svc-c72fw [856.425996ms] Jan 26 21:14:37.332: INFO: Created: latency-svc-4n8xw Jan 26 21:14:37.341: INFO: Got endpoints: latency-svc-4n8xw [769.890595ms] Jan 26 21:14:37.362: INFO: Created: latency-svc-4zp6z Jan 26 21:14:37.368: INFO: Got endpoints: latency-svc-4zp6z [772.028172ms] Jan 26 21:14:37.391: INFO: Created: latency-svc-45rw9 Jan 26 21:14:37.395: INFO: Got endpoints: latency-svc-45rw9 [777.580298ms] Jan 26 21:14:37.506: INFO: Created: latency-svc-nx2tb Jan 26 21:14:37.516: INFO: Got endpoints: latency-svc-nx2tb [871.810169ms] Jan 26 21:14:37.539: INFO: Created: latency-svc-qwcl9 Jan 26 21:14:37.553: INFO: Got endpoints: latency-svc-qwcl9 [779.818552ms] Jan 26 21:14:37.598: INFO: Created: latency-svc-zrvnl Jan 26 21:14:37.668: INFO: Got endpoints: latency-svc-zrvnl [889.085639ms] Jan 26 21:14:37.679: INFO: Created: latency-svc-s8hjd Jan 26 21:14:37.683: INFO: Got endpoints: latency-svc-s8hjd [844.327437ms] Jan 26 21:14:37.720: INFO: Created: latency-svc-btt8w Jan 26 21:14:37.756: INFO: Got endpoints: latency-svc-btt8w [808.969532ms] Jan 26 21:14:37.874: INFO: Created: latency-svc-w5mhv Jan 26 21:14:37.881: INFO: Got endpoints: latency-svc-w5mhv [908.043719ms] Jan 26 21:14:37.912: INFO: Created: latency-svc-cgnrn Jan 26 21:14:37.916: INFO: Got endpoints: latency-svc-cgnrn [939.634899ms] Jan 26 21:14:38.028: INFO: Created: latency-svc-5bk7c Jan 26 21:14:38.029: INFO: Got endpoints: latency-svc-5bk7c [1.010588699s] Jan 26 21:14:38.070: INFO: Created: latency-svc-mwqbw Jan 26 21:14:38.072: INFO: Got endpoints: latency-svc-mwqbw [885.800986ms] Jan 26 21:14:38.098: INFO: Created: latency-svc-q89dl Jan 26 21:14:38.106: INFO: Got endpoints: latency-svc-q89dl [897.857008ms] Jan 26 21:14:38.132: INFO: Created: latency-svc-z4b8h Jan 26 21:14:38.180: INFO: Got endpoints: latency-svc-z4b8h [936.390216ms] Jan 26 21:14:38.183: INFO: Created: latency-svc-8bw7n Jan 26 21:14:38.193: INFO: Got endpoints: latency-svc-8bw7n [865.331039ms] Jan 26 21:14:38.216: INFO: Created: latency-svc-d42s5 Jan 26 21:14:38.226: INFO: Got endpoints: latency-svc-d42s5 [885.23684ms] Jan 26 21:14:38.334: INFO: Created: latency-svc-c4bbz Jan 26 21:14:38.336: INFO: Got endpoints: latency-svc-c4bbz [967.371878ms] Jan 26 21:14:38.398: INFO: Created: latency-svc-c9sph Jan 26 21:14:38.401: INFO: Got endpoints: latency-svc-c9sph [1.006753757s] Jan 26 21:14:38.430: INFO: Created: latency-svc-qqcxw Jan 26 21:14:38.487: INFO: Got endpoints: latency-svc-qqcxw [971.056956ms] Jan 26 21:14:38.516: INFO: Created: latency-svc-9vfnt Jan 26 21:14:38.522: INFO: Got endpoints: latency-svc-9vfnt [967.818126ms] Jan 26 21:14:38.550: INFO: Created: latency-svc-k92hr Jan 26 21:14:38.557: INFO: Got endpoints: latency-svc-k92hr [888.853353ms] Jan 26 21:14:38.574: INFO: Created: latency-svc-8p4tf Jan 
26 21:14:38.583: INFO: Got endpoints: latency-svc-8p4tf [900.038113ms] Jan 26 21:14:38.627: INFO: Created: latency-svc-dsj8f Jan 26 21:14:38.629: INFO: Got endpoints: latency-svc-dsj8f [873.273235ms] Jan 26 21:14:38.660: INFO: Created: latency-svc-b668r Jan 26 21:14:38.761: INFO: Got endpoints: latency-svc-b668r [879.887295ms] Jan 26 21:14:38.762: INFO: Created: latency-svc-6gcrr Jan 26 21:14:38.764: INFO: Got endpoints: latency-svc-6gcrr [847.661955ms] Jan 26 21:14:38.814: INFO: Created: latency-svc-ltbht Jan 26 21:14:38.823: INFO: Created: latency-svc-2vspw Jan 26 21:14:38.823: INFO: Got endpoints: latency-svc-ltbht [793.763574ms] Jan 26 21:14:38.841: INFO: Got endpoints: latency-svc-2vspw [768.019342ms] Jan 26 21:14:38.842: INFO: Created: latency-svc-kln62 Jan 26 21:14:38.933: INFO: Created: latency-svc-t7fhr Jan 26 21:14:38.934: INFO: Got endpoints: latency-svc-kln62 [828.083384ms] Jan 26 21:14:38.942: INFO: Got endpoints: latency-svc-t7fhr [761.490474ms] Jan 26 21:14:38.971: INFO: Created: latency-svc-pgd2g Jan 26 21:14:38.988: INFO: Got endpoints: latency-svc-pgd2g [795.511242ms] Jan 26 21:14:38.991: INFO: Created: latency-svc-j6cr4 Jan 26 21:14:38.994: INFO: Got endpoints: latency-svc-j6cr4 [768.282173ms] Jan 26 21:14:39.093: INFO: Created: latency-svc-x7d7m Jan 26 21:14:39.115: INFO: Created: latency-svc-cx52k Jan 26 21:14:39.118: INFO: Got endpoints: latency-svc-x7d7m [781.958589ms] Jan 26 21:14:39.122: INFO: Got endpoints: latency-svc-cx52k [720.399892ms] Jan 26 21:14:39.140: INFO: Created: latency-svc-vmmhp Jan 26 21:14:39.190: INFO: Got endpoints: latency-svc-vmmhp [702.559387ms] Jan 26 21:14:39.194: INFO: Created: latency-svc-xbn6b Jan 26 21:14:39.248: INFO: Got endpoints: latency-svc-xbn6b [726.055077ms] Jan 26 21:14:39.270: INFO: Created: latency-svc-7dqhm Jan 26 21:14:39.277: INFO: Got endpoints: latency-svc-7dqhm [719.033272ms] Jan 26 21:14:39.296: INFO: Created: latency-svc-qr9f9 Jan 26 21:14:39.314: INFO: Got endpoints: latency-svc-qr9f9 [730.764027ms] Jan 26 21:14:39.315: INFO: Created: latency-svc-d4bnq Jan 26 21:14:39.447: INFO: Got endpoints: latency-svc-d4bnq [817.987582ms] Jan 26 21:14:39.495: INFO: Created: latency-svc-b8hs2 Jan 26 21:14:39.503: INFO: Got endpoints: latency-svc-b8hs2 [741.748423ms] Jan 26 21:14:39.532: INFO: Created: latency-svc-dj6gr Jan 26 21:14:39.539: INFO: Got endpoints: latency-svc-dj6gr [774.783187ms] Jan 26 21:14:39.583: INFO: Created: latency-svc-tqcdx Jan 26 21:14:39.601: INFO: Got endpoints: latency-svc-tqcdx [777.254449ms] Jan 26 21:14:39.659: INFO: Created: latency-svc-bwtgl Jan 26 21:14:39.665: INFO: Got endpoints: latency-svc-bwtgl [824.033846ms] Jan 26 21:14:39.741: INFO: Created: latency-svc-sx5sr Jan 26 21:14:39.750: INFO: Got endpoints: latency-svc-sx5sr [816.371296ms] Jan 26 21:14:39.795: INFO: Created: latency-svc-6jmqv Jan 26 21:14:39.807: INFO: Got endpoints: latency-svc-6jmqv [864.794923ms] Jan 26 21:14:39.984: INFO: Created: latency-svc-l78rv Jan 26 21:14:39.997: INFO: Got endpoints: latency-svc-l78rv [1.008685218s] Jan 26 21:14:40.042: INFO: Created: latency-svc-l4mn4 Jan 26 21:14:40.056: INFO: Got endpoints: latency-svc-l4mn4 [1.061841882s] Jan 26 21:14:40.087: INFO: Created: latency-svc-hdhjp Jan 26 21:14:40.145: INFO: Got endpoints: latency-svc-hdhjp [1.027069921s] Jan 26 21:14:40.162: INFO: Created: latency-svc-s4mxn Jan 26 21:14:40.210: INFO: Created: latency-svc-k4b2x Jan 26 21:14:40.213: INFO: Got endpoints: latency-svc-s4mxn [1.091285378s] Jan 26 21:14:40.244: INFO: Got endpoints: latency-svc-k4b2x [1.054181708s] 
Jan 26 21:14:40.274: INFO: Created: latency-svc-r7q77 Jan 26 21:14:40.282: INFO: Got endpoints: latency-svc-r7q77 [1.034047044s] Jan 26 21:14:40.304: INFO: Created: latency-svc-f5ffc Jan 26 21:14:40.324: INFO: Got endpoints: latency-svc-f5ffc [1.047471405s] Jan 26 21:14:40.411: INFO: Created: latency-svc-9xrkp Jan 26 21:14:40.420: INFO: Got endpoints: latency-svc-9xrkp [1.106048645s] Jan 26 21:14:40.443: INFO: Created: latency-svc-v7wt2 Jan 26 21:14:40.450: INFO: Got endpoints: latency-svc-v7wt2 [1.002566723s] Jan 26 21:14:40.473: INFO: Created: latency-svc-2bl68 Jan 26 21:14:40.480: INFO: Got endpoints: latency-svc-2bl68 [976.84606ms] Jan 26 21:14:40.511: INFO: Created: latency-svc-8nddb Jan 26 21:14:40.575: INFO: Created: latency-svc-7h5mg Jan 26 21:14:40.583: INFO: Got endpoints: latency-svc-8nddb [1.044357161s] Jan 26 21:14:40.584: INFO: Got endpoints: latency-svc-7h5mg [983.285686ms] Jan 26 21:14:40.606: INFO: Created: latency-svc-6rlq2 Jan 26 21:14:40.621: INFO: Got endpoints: latency-svc-6rlq2 [955.826345ms] Jan 26 21:14:40.624: INFO: Created: latency-svc-x9wmv Jan 26 21:14:40.642: INFO: Got endpoints: latency-svc-x9wmv [891.971012ms] Jan 26 21:14:40.721: INFO: Created: latency-svc-7prlw Jan 26 21:14:40.724: INFO: Got endpoints: latency-svc-7prlw [917.059449ms] Jan 26 21:14:40.746: INFO: Created: latency-svc-8rx9w Jan 26 21:14:40.768: INFO: Got endpoints: latency-svc-8rx9w [770.986426ms] Jan 26 21:14:40.770: INFO: Created: latency-svc-vx8qs Jan 26 21:14:40.798: INFO: Got endpoints: latency-svc-vx8qs [741.888405ms] Jan 26 21:14:40.889: INFO: Created: latency-svc-crh6c Jan 26 21:14:40.941: INFO: Got endpoints: latency-svc-crh6c [796.350737ms] Jan 26 21:14:40.945: INFO: Created: latency-svc-66dlb Jan 26 21:14:40.949: INFO: Got endpoints: latency-svc-66dlb [735.805499ms] Jan 26 21:14:40.970: INFO: Created: latency-svc-d6ggp Jan 26 21:14:40.972: INFO: Got endpoints: latency-svc-d6ggp [728.061053ms] Jan 26 21:14:41.068: INFO: Created: latency-svc-tbz2h Jan 26 21:14:41.132: INFO: Got endpoints: latency-svc-tbz2h [850.34864ms] Jan 26 21:14:41.137: INFO: Created: latency-svc-v22s2 Jan 26 21:14:41.138: INFO: Got endpoints: latency-svc-v22s2 [813.999006ms] Jan 26 21:14:41.195: INFO: Created: latency-svc-j79tr Jan 26 21:14:41.203: INFO: Got endpoints: latency-svc-j79tr [782.610438ms] Jan 26 21:14:41.223: INFO: Created: latency-svc-b2wfp Jan 26 21:14:41.226: INFO: Got endpoints: latency-svc-b2wfp [775.734978ms] Jan 26 21:14:41.244: INFO: Created: latency-svc-7v26h Jan 26 21:14:41.251: INFO: Got endpoints: latency-svc-7v26h [771.793727ms] Jan 26 21:14:41.277: INFO: Created: latency-svc-jnzv8 Jan 26 21:14:41.282: INFO: Got endpoints: latency-svc-jnzv8 [697.745525ms] Jan 26 21:14:41.355: INFO: Created: latency-svc-wflpc Jan 26 21:14:41.380: INFO: Got endpoints: latency-svc-wflpc [796.644768ms] Jan 26 21:14:41.388: INFO: Created: latency-svc-gfgqm Jan 26 21:14:41.407: INFO: Got endpoints: latency-svc-gfgqm [785.921777ms] Jan 26 21:14:41.507: INFO: Created: latency-svc-g2ksh Jan 26 21:14:41.530: INFO: Got endpoints: latency-svc-g2ksh [887.072332ms] Jan 26 21:14:41.533: INFO: Created: latency-svc-f4bfh Jan 26 21:14:41.541: INFO: Got endpoints: latency-svc-f4bfh [816.284243ms] Jan 26 21:14:41.573: INFO: Created: latency-svc-dql4r Jan 26 21:14:41.583: INFO: Got endpoints: latency-svc-dql4r [814.650368ms] Jan 26 21:14:41.648: INFO: Created: latency-svc-qd26p Jan 26 21:14:41.648: INFO: Got endpoints: latency-svc-qd26p [850.005498ms] Jan 26 21:14:41.674: INFO: Created: latency-svc-wjb2m Jan 26 
21:14:41.685: INFO: Got endpoints: latency-svc-wjb2m [743.396662ms] Jan 26 21:14:41.708: INFO: Created: latency-svc-ftwkc Jan 26 21:14:41.720: INFO: Got endpoints: latency-svc-ftwkc [770.766792ms] Jan 26 21:14:41.817: INFO: Created: latency-svc-drnx6 Jan 26 21:14:41.832: INFO: Got endpoints: latency-svc-drnx6 [859.725272ms] Jan 26 21:14:41.862: INFO: Created: latency-svc-7fhjl Jan 26 21:14:41.900: INFO: Created: latency-svc-hbjsv Jan 26 21:14:41.901: INFO: Got endpoints: latency-svc-7fhjl [768.732111ms] Jan 26 21:14:41.911: INFO: Got endpoints: latency-svc-hbjsv [772.139339ms] Jan 26 21:14:41.976: INFO: Created: latency-svc-5bms7 Jan 26 21:14:41.985: INFO: Got endpoints: latency-svc-5bms7 [781.623792ms] Jan 26 21:14:42.003: INFO: Created: latency-svc-gnpvl Jan 26 21:14:42.007: INFO: Got endpoints: latency-svc-gnpvl [780.562328ms] Jan 26 21:14:42.027: INFO: Created: latency-svc-qd7nf Jan 26 21:14:42.036: INFO: Got endpoints: latency-svc-qd7nf [784.004431ms] Jan 26 21:14:42.151: INFO: Created: latency-svc-hwzv4 Jan 26 21:14:42.159: INFO: Got endpoints: latency-svc-hwzv4 [877.127787ms] Jan 26 21:14:42.221: INFO: Created: latency-svc-h5hh6 Jan 26 21:14:42.229: INFO: Got endpoints: latency-svc-h5hh6 [848.427003ms] Jan 26 21:14:42.301: INFO: Created: latency-svc-p9pkp Jan 26 21:14:42.302: INFO: Got endpoints: latency-svc-p9pkp [894.597205ms] Jan 26 21:14:42.348: INFO: Created: latency-svc-mzhmr Jan 26 21:14:42.348: INFO: Got endpoints: latency-svc-mzhmr [818.116656ms] Jan 26 21:14:42.368: INFO: Created: latency-svc-cdmqw Jan 26 21:14:42.373: INFO: Got endpoints: latency-svc-cdmqw [832.457999ms] Jan 26 21:14:42.392: INFO: Created: latency-svc-b86dg Jan 26 21:14:42.438: INFO: Got endpoints: latency-svc-b86dg [855.136387ms] Jan 26 21:14:42.460: INFO: Created: latency-svc-fqcpz Jan 26 21:14:42.465: INFO: Got endpoints: latency-svc-fqcpz [816.281324ms] Jan 26 21:14:42.484: INFO: Created: latency-svc-9nvh5 Jan 26 21:14:42.489: INFO: Got endpoints: latency-svc-9nvh5 [804.110386ms] Jan 26 21:14:42.591: INFO: Created: latency-svc-bdwcl Jan 26 21:14:42.596: INFO: Got endpoints: latency-svc-bdwcl [875.785274ms] Jan 26 21:14:42.643: INFO: Created: latency-svc-zg65z Jan 26 21:14:42.651: INFO: Got endpoints: latency-svc-zg65z [819.16481ms] Jan 26 21:14:42.674: INFO: Created: latency-svc-rtbnk Jan 26 21:14:42.687: INFO: Got endpoints: latency-svc-rtbnk [785.967954ms] Jan 26 21:14:42.728: INFO: Created: latency-svc-66lc9 Jan 26 21:14:42.730: INFO: Got endpoints: latency-svc-66lc9 [819.584866ms] Jan 26 21:14:42.755: INFO: Created: latency-svc-rrzkc Jan 26 21:14:42.772: INFO: Got endpoints: latency-svc-rrzkc [786.904511ms] Jan 26 21:14:42.774: INFO: Created: latency-svc-frw4m Jan 26 21:14:42.776: INFO: Got endpoints: latency-svc-frw4m [769.270624ms] Jan 26 21:14:42.818: INFO: Created: latency-svc-ng98q Jan 26 21:14:42.900: INFO: Got endpoints: latency-svc-ng98q [864.677529ms] Jan 26 21:14:42.907: INFO: Created: latency-svc-ddrvd Jan 26 21:14:42.924: INFO: Got endpoints: latency-svc-ddrvd [765.032613ms] Jan 26 21:14:42.953: INFO: Created: latency-svc-bqsr2 Jan 26 21:14:42.960: INFO: Got endpoints: latency-svc-bqsr2 [59.510864ms] Jan 26 21:14:42.982: INFO: Created: latency-svc-8rv9w Jan 26 21:14:42.991: INFO: Got endpoints: latency-svc-8rv9w [761.883639ms] Jan 26 21:14:43.059: INFO: Created: latency-svc-t9whf Jan 26 21:14:43.061: INFO: Got endpoints: latency-svc-t9whf [759.532584ms] Jan 26 21:14:43.093: INFO: Created: latency-svc-gsspr Jan 26 21:14:43.112: INFO: Created: latency-svc-lbgph Jan 26 21:14:43.118: INFO: 
Got endpoints: latency-svc-gsspr [769.609979ms] Jan 26 21:14:43.131: INFO: Got endpoints: latency-svc-lbgph [757.363187ms] Jan 26 21:14:43.148: INFO: Created: latency-svc-69v7t Jan 26 21:14:43.264: INFO: Got endpoints: latency-svc-69v7t [825.758032ms] Jan 26 21:14:43.271: INFO: Created: latency-svc-bk5rp Jan 26 21:14:43.281: INFO: Got endpoints: latency-svc-bk5rp [816.555591ms] Jan 26 21:14:43.302: INFO: Created: latency-svc-kmxh9 Jan 26 21:14:43.307: INFO: Got endpoints: latency-svc-kmxh9 [817.232727ms] Jan 26 21:14:43.327: INFO: Created: latency-svc-7kxl9 Jan 26 21:14:43.331: INFO: Got endpoints: latency-svc-7kxl9 [734.907857ms] Jan 26 21:14:43.352: INFO: Created: latency-svc-rclv7 Jan 26 21:14:43.360: INFO: Got endpoints: latency-svc-rclv7 [708.215967ms] Jan 26 21:14:43.408: INFO: Created: latency-svc-9gvhl Jan 26 21:14:43.414: INFO: Got endpoints: latency-svc-9gvhl [726.526615ms] Jan 26 21:14:43.435: INFO: Created: latency-svc-qfnxn Jan 26 21:14:43.452: INFO: Got endpoints: latency-svc-qfnxn [721.353957ms] Jan 26 21:14:43.488: INFO: Created: latency-svc-thlsc Jan 26 21:14:43.492: INFO: Got endpoints: latency-svc-thlsc [720.099904ms] Jan 26 21:14:43.637: INFO: Created: latency-svc-lk24q Jan 26 21:14:43.661: INFO: Got endpoints: latency-svc-lk24q [884.664379ms] Jan 26 21:14:43.692: INFO: Created: latency-svc-rzkdx Jan 26 21:14:43.705: INFO: Got endpoints: latency-svc-rzkdx [780.420562ms] Jan 26 21:14:43.730: INFO: Created: latency-svc-jmpht Jan 26 21:14:43.762: INFO: Got endpoints: latency-svc-jmpht [801.479755ms] Jan 26 21:14:43.774: INFO: Created: latency-svc-mxrnk Jan 26 21:14:43.794: INFO: Got endpoints: latency-svc-mxrnk [802.60624ms] Jan 26 21:14:43.819: INFO: Created: latency-svc-tcs4d Jan 26 21:14:43.826: INFO: Got endpoints: latency-svc-tcs4d [765.148187ms] Jan 26 21:14:43.863: INFO: Created: latency-svc-66b7p Jan 26 21:14:43.943: INFO: Got endpoints: latency-svc-66b7p [825.151663ms] Jan 26 21:14:43.994: INFO: Created: latency-svc-7tnhz Jan 26 21:14:44.113: INFO: Got endpoints: latency-svc-7tnhz [981.749427ms] Jan 26 21:14:44.143: INFO: Created: latency-svc-mlzw6 Jan 26 21:14:44.172: INFO: Created: latency-svc-rnqwx Jan 26 21:14:44.172: INFO: Got endpoints: latency-svc-mlzw6 [907.708973ms] Jan 26 21:14:44.178: INFO: Got endpoints: latency-svc-rnqwx [896.480703ms] Jan 26 21:14:44.253: INFO: Created: latency-svc-xc58x Jan 26 21:14:44.279: INFO: Created: latency-svc-7vnxf Jan 26 21:14:44.279: INFO: Got endpoints: latency-svc-xc58x [972.2781ms] Jan 26 21:14:44.323: INFO: Got endpoints: latency-svc-7vnxf [991.907593ms] Jan 26 21:14:44.327: INFO: Created: latency-svc-kqgjn Jan 26 21:14:44.332: INFO: Got endpoints: latency-svc-kqgjn [972.082711ms] Jan 26 21:14:44.401: INFO: Created: latency-svc-hzxvw Jan 26 21:14:44.407: INFO: Got endpoints: latency-svc-hzxvw [992.57212ms] Jan 26 21:14:44.554: INFO: Created: latency-svc-9g9tg Jan 26 21:14:44.567: INFO: Got endpoints: latency-svc-9g9tg [1.115349157s] Jan 26 21:14:44.629: INFO: Created: latency-svc-v8lzz Jan 26 21:14:44.644: INFO: Got endpoints: latency-svc-v8lzz [1.152360582s] Jan 26 21:14:44.713: INFO: Created: latency-svc-zpn6d Jan 26 21:14:44.723: INFO: Got endpoints: latency-svc-zpn6d [1.061704385s] Jan 26 21:14:44.761: INFO: Created: latency-svc-65968 Jan 26 21:14:44.791: INFO: Got endpoints: latency-svc-65968 [1.085885768s] Jan 26 21:14:44.794: INFO: Created: latency-svc-zrm6t Jan 26 21:14:44.853: INFO: Got endpoints: latency-svc-zrm6t [1.091759649s] Jan 26 21:14:44.871: INFO: Created: latency-svc-qd852 Jan 26 21:14:44.871: INFO: 
Got endpoints: latency-svc-qd852 [1.077422294s] Jan 26 21:14:44.894: INFO: Created: latency-svc-zgsjs Jan 26 21:14:44.902: INFO: Got endpoints: latency-svc-zgsjs [1.07514164s] Jan 26 21:14:44.992: INFO: Created: latency-svc-9pnfb Jan 26 21:14:44.994: INFO: Got endpoints: latency-svc-9pnfb [1.050954796s] Jan 26 21:14:45.019: INFO: Created: latency-svc-m4w7d Jan 26 21:14:45.024: INFO: Got endpoints: latency-svc-m4w7d [911.238913ms] Jan 26 21:14:45.057: INFO: Created: latency-svc-kvqb4 Jan 26 21:14:45.059: INFO: Got endpoints: latency-svc-kvqb4 [886.967338ms] Jan 26 21:14:45.177: INFO: Created: latency-svc-9zhg9 Jan 26 21:14:45.179: INFO: Got endpoints: latency-svc-9zhg9 [1.00128025s] Jan 26 21:14:45.252: INFO: Created: latency-svc-6zk7m Jan 26 21:14:45.253: INFO: Got endpoints: latency-svc-6zk7m [973.521776ms] Jan 26 21:14:45.360: INFO: Created: latency-svc-7nkql Jan 26 21:14:45.363: INFO: Got endpoints: latency-svc-7nkql [1.039926596s] Jan 26 21:14:45.546: INFO: Created: latency-svc-mr68t Jan 26 21:14:45.802: INFO: Got endpoints: latency-svc-mr68t [1.469838527s] Jan 26 21:14:45.814: INFO: Created: latency-svc-w7cnt Jan 26 21:14:45.863: INFO: Got endpoints: latency-svc-w7cnt [1.455819038s] Jan 26 21:14:45.867: INFO: Created: latency-svc-q86v8 Jan 26 21:14:45.890: INFO: Got endpoints: latency-svc-q86v8 [1.322266015s] Jan 26 21:14:46.032: INFO: Created: latency-svc-gl4kw Jan 26 21:14:46.040: INFO: Got endpoints: latency-svc-gl4kw [1.395382829s] Jan 26 21:14:46.072: INFO: Created: latency-svc-j4mqd Jan 26 21:14:46.077: INFO: Got endpoints: latency-svc-j4mqd [1.353850824s] Jan 26 21:14:46.148: INFO: Created: latency-svc-xtnm7 Jan 26 21:14:46.173: INFO: Created: latency-svc-h5mkr Jan 26 21:14:46.176: INFO: Got endpoints: latency-svc-xtnm7 [1.384555956s] Jan 26 21:14:46.197: INFO: Got endpoints: latency-svc-h5mkr [1.343103531s] Jan 26 21:14:46.231: INFO: Created: latency-svc-wr2jn Jan 26 21:14:46.238: INFO: Got endpoints: latency-svc-wr2jn [1.366856097s] Jan 26 21:14:46.333: INFO: Created: latency-svc-pqjnq Jan 26 21:14:46.340: INFO: Got endpoints: latency-svc-pqjnq [1.437747077s] Jan 26 21:14:46.368: INFO: Created: latency-svc-9ct6k Jan 26 21:14:46.380: INFO: Got endpoints: latency-svc-9ct6k [1.385987928s] Jan 26 21:14:46.499: INFO: Created: latency-svc-t8f52 Jan 26 21:14:46.530: INFO: Got endpoints: latency-svc-t8f52 [1.505516124s] Jan 26 21:14:46.535: INFO: Created: latency-svc-bd2sv Jan 26 21:14:46.541: INFO: Got endpoints: latency-svc-bd2sv [1.481198484s] Jan 26 21:14:46.654: INFO: Created: latency-svc-tmzpg Jan 26 21:14:46.655: INFO: Got endpoints: latency-svc-tmzpg [1.475609579s] Jan 26 21:14:46.685: INFO: Created: latency-svc-jzfr2 Jan 26 21:14:46.688: INFO: Got endpoints: latency-svc-jzfr2 [1.434670629s] Jan 26 21:14:46.701: INFO: Created: latency-svc-qnzw8 Jan 26 21:14:46.707: INFO: Got endpoints: latency-svc-qnzw8 [1.344291097s] Jan 26 21:14:46.818: INFO: Created: latency-svc-p9tsz Jan 26 21:14:46.823: INFO: Got endpoints: latency-svc-p9tsz [1.020694736s] Jan 26 21:14:46.839: INFO: Created: latency-svc-n2jfs Jan 26 21:14:46.849: INFO: Got endpoints: latency-svc-n2jfs [986.737038ms] Jan 26 21:14:46.890: INFO: Created: latency-svc-vbwxp Jan 26 21:14:46.962: INFO: Got endpoints: latency-svc-vbwxp [1.07171197s] Jan 26 21:14:46.986: INFO: Created: latency-svc-xtckk Jan 26 21:14:46.987: INFO: Got endpoints: latency-svc-xtckk [946.470801ms] Jan 26 21:14:47.009: INFO: Created: latency-svc-6gxwb Jan 26 21:14:47.011: INFO: Got endpoints: latency-svc-6gxwb [933.646659ms] Jan 26 21:14:47.029: INFO: 
Created: latency-svc-zmvjz Jan 26 21:14:47.036: INFO: Got endpoints: latency-svc-zmvjz [859.916753ms] Jan 26 21:14:47.142: INFO: Created: latency-svc-wcq7g Jan 26 21:14:47.147: INFO: Got endpoints: latency-svc-wcq7g [949.963385ms] Jan 26 21:14:47.176: INFO: Created: latency-svc-xw86q Jan 26 21:14:47.186: INFO: Got endpoints: latency-svc-xw86q [947.563468ms] Jan 26 21:14:47.229: INFO: Created: latency-svc-5t5cb Jan 26 21:14:47.301: INFO: Got endpoints: latency-svc-5t5cb [961.692526ms] Jan 26 21:14:47.319: INFO: Created: latency-svc-nwknz Jan 26 21:14:47.319: INFO: Got endpoints: latency-svc-nwknz [939.270466ms] Jan 26 21:14:47.354: INFO: Created: latency-svc-hm6qq Jan 26 21:14:47.365: INFO: Got endpoints: latency-svc-hm6qq [834.635081ms] Jan 26 21:14:47.389: INFO: Created: latency-svc-spshx Jan 26 21:14:47.398: INFO: Got endpoints: latency-svc-spshx [857.194495ms] Jan 26 21:14:47.544: INFO: Created: latency-svc-zwc5s Jan 26 21:14:47.545: INFO: Created: latency-svc-h6fdh Jan 26 21:14:47.545: INFO: Got endpoints: latency-svc-h6fdh [890.075575ms] Jan 26 21:14:47.601: INFO: Got endpoints: latency-svc-zwc5s [913.264727ms] Jan 26 21:14:47.606: INFO: Created: latency-svc-h5gkc Jan 26 21:14:47.614: INFO: Got endpoints: latency-svc-h5gkc [906.914014ms] Jan 26 21:14:47.615: INFO: Latencies: [59.510864ms 477.988407ms 517.819779ms 584.88832ms 631.183102ms 668.625433ms 697.745525ms 702.559387ms 708.215967ms 717.791346ms 719.033272ms 720.099904ms 720.399892ms 721.353957ms 726.055077ms 726.526615ms 728.061053ms 730.764027ms 734.907857ms 735.805499ms 736.69404ms 741.748423ms 741.888405ms 743.396662ms 757.363187ms 758.463262ms 759.532584ms 761.490474ms 761.883639ms 762.174487ms 765.032613ms 765.148187ms 765.923639ms 768.019342ms 768.282173ms 768.732111ms 769.270624ms 769.609979ms 769.890595ms 770.766792ms 770.986426ms 771.793727ms 772.028172ms 772.139339ms 774.783187ms 775.734978ms 777.254449ms 777.580298ms 779.818552ms 780.420562ms 780.562328ms 781.623792ms 781.958589ms 782.610438ms 784.004431ms 785.921777ms 785.967954ms 786.904511ms 788.36423ms 793.763574ms 795.511242ms 796.106852ms 796.350737ms 796.644768ms 799.126758ms 800.772506ms 801.479755ms 802.60624ms 804.110386ms 808.081879ms 808.969532ms 813.999006ms 814.650368ms 816.281324ms 816.284243ms 816.371296ms 816.555591ms 817.232727ms 817.987582ms 818.116656ms 819.16481ms 819.584866ms 820.582517ms 823.745064ms 824.033846ms 824.730363ms 825.151663ms 825.758032ms 828.083384ms 830.447671ms 832.457999ms 834.635081ms 844.327437ms 844.703388ms 847.661955ms 848.427003ms 850.005498ms 850.34864ms 855.136387ms 856.425996ms 857.194495ms 859.725272ms 859.916753ms 864.677529ms 864.794923ms 865.331039ms 867.601888ms 871.810169ms 873.273235ms 875.785274ms 877.127787ms 879.887295ms 884.664379ms 885.23684ms 885.800986ms 886.967338ms 887.072332ms 888.853353ms 889.085639ms 890.075575ms 891.971012ms 892.747104ms 894.597205ms 895.850317ms 896.480703ms 897.857008ms 900.038113ms 906.914014ms 907.708973ms 908.043719ms 911.238913ms 913.264727ms 917.059449ms 917.202969ms 933.646659ms 936.390216ms 939.270466ms 939.634899ms 946.470801ms 947.563468ms 949.963385ms 955.826345ms 961.692526ms 961.815807ms 967.371878ms 967.818126ms 971.056956ms 972.082711ms 972.2781ms 973.521776ms 976.84606ms 981.749427ms 983.285686ms 986.737038ms 991.907593ms 992.57212ms 1.00128025s 1.002566723s 1.006753757s 1.008685218s 1.010588699s 1.020694736s 1.027069921s 1.034047044s 1.039926596s 1.044357161s 1.047471405s 1.050954796s 1.054181708s 1.061704385s 1.061841882s 1.070941421s 1.07171197s 1.07514164s 
1.077422294s 1.085885768s 1.091285378s 1.091759649s 1.09537767s 1.103609558s 1.106048645s 1.115349157s 1.152360582s 1.220101254s 1.260312626s 1.322266015s 1.343103531s 1.344291097s 1.353850824s 1.366856097s 1.384555956s 1.385987928s 1.395382829s 1.434670629s 1.437747077s 1.455819038s 1.469838527s 1.475609579s 1.481198484s 1.505516124s] Jan 26 21:14:47.615: INFO: 50 %ile: 857.194495ms Jan 26 21:14:47.615: INFO: 90 %ile: 1.106048645s Jan 26 21:14:47.615: INFO: 99 %ile: 1.481198484s Jan 26 21:14:47.615: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:14:47.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5567" for this suite. • [SLOW TEST:21.355 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":21,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:14:47.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 26 21:14:47.680: INFO: namespace kubectl-915 Jan 26 21:14:47.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-915' Jan 26 21:14:48.130: INFO: stderr: "" Jan 26 21:14:48.130: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jan 26 21:14:49.170: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:49.170: INFO: Found 0 / 1 Jan 26 21:14:50.144: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:50.144: INFO: Found 0 / 1 Jan 26 21:14:51.139: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:51.139: INFO: Found 0 / 1 Jan 26 21:14:52.144: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:52.144: INFO: Found 0 / 1 Jan 26 21:14:53.203: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:53.203: INFO: Found 0 / 1 Jan 26 21:14:54.225: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:54.225: INFO: Found 0 / 1 Jan 26 21:14:55.184: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:55.185: INFO: Found 0 / 1 Jan 26 21:14:56.197: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:56.197: INFO: Found 0 / 1 Jan 26 21:14:57.168: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:57.168: INFO: Found 0 / 1 Jan 26 21:14:58.153: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:58.153: INFO: Found 0 / 1 Jan 26 21:14:59.234: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:14:59.234: INFO: Found 0 / 1 Jan 26 21:15:00.157: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:15:00.157: INFO: Found 1 / 1 Jan 26 21:15:00.157: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 26 21:15:00.175: INFO: Selector matched 1 pods for map[app:agnhost] Jan 26 21:15:00.175: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 26 21:15:00.175: INFO: wait on agnhost-master startup in kubectl-915 Jan 26 21:15:00.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-h52fh agnhost-master --namespace=kubectl-915' Jan 26 21:15:00.375: INFO: stderr: "" Jan 26 21:15:00.375: INFO: stdout: "Paused\n" STEP: exposing RC Jan 26 21:15:00.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-915' Jan 26 21:15:00.740: INFO: stderr: "" Jan 26 21:15:00.740: INFO: stdout: "service/rm2 exposed\n" Jan 26 21:15:00.871: INFO: Service rm2 in namespace kubectl-915 found. STEP: exposing service Jan 26 21:15:02.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-915' Jan 26 21:15:03.170: INFO: stderr: "" Jan 26 21:15:03.170: INFO: stdout: "service/rm3 exposed\n" Jan 26 21:15:03.186: INFO: Service rm3 in namespace kubectl-915 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:15:05.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-915" for this suite. 
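[Note on the expose steps above] `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` generates a Service whose selector is copied from the RC's pod labels; exposing the resulting service as rm3 copies that same selector again with a new port. A sketch of the object the first command produces, assuming the pod label is app=agnhost as the "Selector matched ... map[app:agnhost]" lines indicate (client-go types):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Equivalent of `kubectl expose rc agnhost-master --name=rm2
        // --port=1234 --target-port=6379 --namespace=kubectl-915`.
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-915"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "agnhost"},
                Ports: []corev1.ServicePort{{
                    Port:       1234,                 // service port
                    TargetPort: intstr.FromInt(6379), // container port
                }},
            },
        }
        fmt.Printf("service %s: %d -> %s\n",
            svc.Name, svc.Spec.Ports[0].Port, svc.Spec.Ports[0].TargetPort.String())
    }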
• [SLOW TEST:17.712 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":22,"skipped":417,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:15:05.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f58f85b9-e8f3-4e34-88fc-39d365f890fe STEP: Creating a pod to test consume configMaps Jan 26 21:15:05.522: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452" in namespace "projected-5465" to be "success or failure" Jan 26 21:15:05.646: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Pending", Reason="", readiness=false. Elapsed: 123.873547ms Jan 26 21:15:07.667: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144542859s Jan 26 21:15:09.706: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183912104s Jan 26 21:15:11.752: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229487992s Jan 26 21:15:13.830: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308285239s Jan 26 21:15:16.012: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Pending", Reason="", readiness=false. Elapsed: 10.490002664s Jan 26 21:15:18.031: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.509053303s STEP: Saw pod success Jan 26 21:15:18.031: INFO: Pod "pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452" satisfied condition "success or failure" Jan 26 21:15:18.036: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452 container projected-configmap-volume-test: STEP: delete the pod Jan 26 21:15:18.121: INFO: Waiting for pod pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452 to disappear Jan 26 21:15:18.137: INFO: Pod pod-projected-configmaps-03ace9e6-0156-4aa7-aced-1d112bf0b452 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:15:18.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5465" for this suite. • [SLOW TEST:12.835 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":417,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:15:18.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-fdbcd2c7-ab40-4127-b646-7d4f7644dda8 STEP: Creating a pod to test consume configMaps Jan 26 21:15:18.420: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456" in namespace "projected-6373" to be "success or failure" Jan 26 21:15:18.578: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456": Phase="Pending", Reason="", readiness=false. Elapsed: 157.451066ms Jan 26 21:15:20.590: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170187741s Jan 26 21:15:22.648: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22784827s Jan 26 21:15:24.656: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235438731s Jan 26 21:15:26.667: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.246390142s Jan 26 21:15:28.677: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.256792686s STEP: Saw pod success Jan 26 21:15:28.677: INFO: Pod "pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456" satisfied condition "success or failure" Jan 26 21:15:28.683: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456 container projected-configmap-volume-test: STEP: delete the pod Jan 26 21:15:29.016: INFO: Waiting for pod pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456 to disappear Jan 26 21:15:29.023: INFO: Pod pod-projected-configmaps-55c368d0-a29c-499e-93ec-e89321b39456 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:15:29.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6373" for this suite. • [SLOW TEST:10.885 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":438,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:15:29.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:15:57.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4021" for this suite. 
• [SLOW TEST:28.148 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":25,"skipped":459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:15:57.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 26 21:15:57.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6" in namespace "projected-5076" to be "success or failure" Jan 26 21:15:57.432: INFO: Pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.072055ms Jan 26 21:15:59.440: INFO: Pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015114352s Jan 26 21:16:01.448: INFO: Pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022968427s Jan 26 21:16:03.455: INFO: Pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030308573s Jan 26 21:16:05.477: INFO: Pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052028224s STEP: Saw pod success Jan 26 21:16:05.477: INFO: Pod "downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6" satisfied condition "success or failure" Jan 26 21:16:05.481: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6 container client-container: STEP: delete the pod Jan 26 21:16:05.537: INFO: Waiting for pod downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6 to disappear Jan 26 21:16:05.547: INFO: Pod downwardapi-volume-2f9ed73e-509e-49d0-999d-3a92e4bd7ce6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:16:05.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5076" for this suite. 
• [SLOW TEST:8.358 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:16:05.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:16:05.866: INFO: Create a RollingUpdate DaemonSet Jan 26 21:16:05.871: INFO: Check that daemon pods launch on every node of the cluster Jan 26 21:16:05.905: INFO: Number of nodes with available pods: 0 Jan 26 21:16:05.905: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:07.408: INFO: Number of nodes with available pods: 0 Jan 26 21:16:07.408: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:07.962: INFO: Number of nodes with available pods: 0 Jan 26 21:16:07.962: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:08.928: INFO: Number of nodes with available pods: 0 Jan 26 21:16:08.928: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:09.924: INFO: Number of nodes with available pods: 0 Jan 26 21:16:09.924: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:12.412: INFO: Number of nodes with available pods: 0 Jan 26 21:16:12.412: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:12.957: INFO: Number of nodes with available pods: 0 Jan 26 21:16:12.958: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:13.993: INFO: Number of nodes with available pods: 0 Jan 26 21:16:13.993: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:16:14.923: INFO: Number of nodes with available pods: 1 Jan 26 21:16:14.923: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 26 21:16:15.922: INFO: Number of nodes with available pods: 2 Jan 26 21:16:15.922: INFO: Number of running nodes: 2, number of available pods: 2 Jan 26 21:16:15.922: INFO: Update the DaemonSet to trigger a rollout Jan 26 21:16:15.933: INFO: Updating DaemonSet daemon-set Jan 26 21:16:33.965: INFO: Roll back the DaemonSet before rollout is complete Jan 26 21:16:33.971: INFO: Updating DaemonSet daemon-set Jan 26 21:16:33.971: INFO: Make sure DaemonSet rollback is complete Jan 26 21:16:33.985: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 26 21:16:33.985: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:35.697: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:35.697: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:36.122: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:36.122: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:37.080: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:37.080: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:38.303: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:38.303: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:39.084: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:39.084: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:40.082: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:40.082: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:41.081: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:41.081: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:42.083: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:42.083: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:43.092: INFO: Wrong image for pod: daemon-set-nhp6k. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 26 21:16:43.092: INFO: Pod daemon-set-nhp6k is not available Jan 26 21:16:44.082: INFO: Pod daemon-set-m8r6p is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2842, will wait for the garbage collector to delete the pods Jan 26 21:16:44.151: INFO: Deleting DaemonSet.extensions daemon-set took: 7.325307ms Jan 26 21:16:44.451: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.497637ms Jan 26 21:16:52.356: INFO: Number of nodes with available pods: 0 Jan 26 21:16:52.356: INFO: Number of running nodes: 0, number of available pods: 0 Jan 26 21:16:52.359: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2842/daemonsets","resourceVersion":"4536338"},"items":null} Jan 26 21:16:52.361: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2842/pods","resourceVersion":"4536338"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:16:52.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2842" for this suite. 
• [SLOW TEST:46.802 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":27,"skipped":524,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:16:52.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-dd7b031b-b621-4483-ac07-61b611171d26 STEP: Creating a pod to test consume secrets Jan 26 21:16:52.556: INFO: Waiting up to 5m0s for pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4" in namespace "secrets-2262" to be "success or failure" Jan 26 21:16:52.575: INFO: Pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.680958ms Jan 26 21:16:54.585: INFO: Pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028580261s Jan 26 21:16:56.594: INFO: Pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037252882s Jan 26 21:16:58.604: INFO: Pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047249876s Jan 26 21:17:00.615: INFO: Pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058359211s STEP: Saw pod success Jan 26 21:17:00.615: INFO: Pod "pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4" satisfied condition "success or failure" Jan 26 21:17:00.619: INFO: Trying to get logs from node jerma-node pod pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4 container secret-volume-test: STEP: delete the pod Jan 26 21:17:00.671: INFO: Waiting for pod pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4 to disappear Jan 26 21:17:00.676: INFO: Pod pod-secrets-9dfcd833-1bc8-48a4-ac86-51f2fb36d3d4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:17:00.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2262" for this suite. 
• [SLOW TEST:8.380 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":537,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:17:00.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2267.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2267.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2267.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2267.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2267.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2267.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 21:17:13.092: INFO: DNS probes using dns-2267/dns-test-09e47159-36f8-46da-a878-3eb35adc87a8 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:17:13.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2267" for this suite. 
• [SLOW TEST:12.507 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":29,"skipped":544,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:17:13.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jan 26 21:17:13.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9991' Jan 26 21:17:13.655: INFO: stderr: "" Jan 26 21:17:13.655: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 26 21:17:13.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9991' Jan 26 21:17:13.977: INFO: stderr: "" Jan 26 21:17:13.977: INFO: stdout: "update-demo-nautilus-thk48 update-demo-nautilus-vbbs4 " Jan 26 21:17:13.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-thk48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9991' Jan 26 21:17:15.502: INFO: stderr: "" Jan 26 21:17:15.502: INFO: stdout: "" Jan 26 21:17:15.502: INFO: update-demo-nautilus-thk48 is created but not running Jan 26 21:17:20.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9991' Jan 26 21:17:20.757: INFO: stderr: "" Jan 26 21:17:20.757: INFO: stdout: "update-demo-nautilus-thk48 update-demo-nautilus-vbbs4 " Jan 26 21:17:20.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-thk48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9991' Jan 26 21:17:21.313: INFO: stderr: "" Jan 26 21:17:21.313: INFO: stdout: "" Jan 26 21:17:21.313: INFO: update-demo-nautilus-thk48 is created but not running Jan 26 21:17:26.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9991' Jan 26 21:17:26.496: INFO: stderr: "" Jan 26 21:17:26.496: INFO: stdout: "update-demo-nautilus-thk48 update-demo-nautilus-vbbs4 " Jan 26 21:17:26.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-thk48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9991' Jan 26 21:17:26.602: INFO: stderr: "" Jan 26 21:17:26.602: INFO: stdout: "true" Jan 26 21:17:26.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-thk48 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9991' Jan 26 21:17:26.724: INFO: stderr: "" Jan 26 21:17:26.724: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:17:26.724: INFO: validating pod update-demo-nautilus-thk48 Jan 26 21:17:26.734: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:17:26.734: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:17:26.734: INFO: update-demo-nautilus-thk48 is verified up and running Jan 26 21:17:26.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbbs4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9991' Jan 26 21:17:26.845: INFO: stderr: "" Jan 26 21:17:26.845: INFO: stdout: "true" Jan 26 21:17:26.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbbs4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9991' Jan 26 21:17:26.997: INFO: stderr: "" Jan 26 21:17:26.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 21:17:26.997: INFO: validating pod update-demo-nautilus-vbbs4 Jan 26 21:17:27.003: INFO: got data: { "image": "nautilus.jpg" } Jan 26 21:17:27.003: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 21:17:27.003: INFO: update-demo-nautilus-vbbs4 is verified up and running STEP: using delete to clean up resources Jan 26 21:17:27.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9991' Jan 26 21:17:27.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 26 21:17:27.157: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 26 21:17:27.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9991' Jan 26 21:17:27.320: INFO: stderr: "No resources found in kubectl-9991 namespace.\n" Jan 26 21:17:27.320: INFO: stdout: "" Jan 26 21:17:27.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9991 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 26 21:17:27.454: INFO: stderr: "" Jan 26 21:17:27.454: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:17:27.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9991" for this suite. • [SLOW TEST:14.204 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":30,"skipped":557,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:17:27.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 26 21:17:31.965: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 26 21:17:33.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670252, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:17:36.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:17:38.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670251, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 26 21:17:41.043: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 26 21:17:49.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8135 to-be-attached-pod -i -c=container1' Jan 26 21:17:49.390: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:17:49.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8135" for this suite. STEP: Destroying namespace "webhook-8135-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.134 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":31,"skipped":565,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:17:49.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:17:49.831: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 26 21:17:49.863: INFO: Number of nodes with available pods: 0 Jan 26 21:17:49.863: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 26 21:17:50.007: INFO: Number of nodes with available pods: 0 Jan 26 21:17:50.007: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:51.015: INFO: Number of nodes with available pods: 0 Jan 26 21:17:51.015: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:52.020: INFO: Number of nodes with available pods: 0 Jan 26 21:17:52.020: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:53.014: INFO: Number of nodes with available pods: 0 Jan 26 21:17:53.015: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:54.019: INFO: Number of nodes with available pods: 0 Jan 26 21:17:54.019: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:55.028: INFO: Number of nodes with available pods: 0 Jan 26 21:17:55.028: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:56.015: INFO: Number of nodes with available pods: 0 Jan 26 21:17:56.015: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:57.015: INFO: Number of nodes with available pods: 0 Jan 26 21:17:57.015: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:58.014: INFO: Number of nodes with available pods: 0 Jan 26 21:17:58.014: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:17:59.048: INFO: Number of nodes with available pods: 0 Jan 26 21:17:59.048: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:00.013: INFO: Number of nodes with available pods: 1 Jan 26 21:18:00.013: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 26 21:18:00.139: INFO: Number of nodes with available pods: 1 Jan 26 21:18:00.140: INFO: Number of running nodes: 0, number of available pods: 1 Jan 26 21:18:01.149: INFO: Number of nodes with available pods: 0 Jan 26 21:18:01.150: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 26 21:18:01.180: INFO: Number of nodes with available pods: 0 Jan 26 21:18:01.180: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:02.222: INFO: Number of nodes with available pods: 0 Jan 26 21:18:02.222: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:03.185: INFO: Number of nodes with available pods: 0 Jan 26 21:18:03.185: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:04.187: INFO: Number of nodes with available pods: 0 Jan 26 21:18:04.187: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:05.186: INFO: Number of nodes with available pods: 0 Jan 26 21:18:05.186: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:06.189: INFO: Number of nodes with available pods: 0 Jan 26 21:18:06.189: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:07.239: INFO: Number of nodes with available pods: 0 Jan 26 21:18:07.239: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:08.187: INFO: Number of nodes with available pods: 0 Jan 26 21:18:08.187: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:09.189: INFO: Number of nodes with available pods: 0 Jan 26 21:18:09.189: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:10.187: INFO: Number of nodes with available pods: 0 Jan 26 21:18:10.187: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:11.187: INFO: Number 
of nodes with available pods: 0 Jan 26 21:18:11.187: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:12.186: INFO: Number of nodes with available pods: 0 Jan 26 21:18:12.186: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:18:13.189: INFO: Number of nodes with available pods: 1 Jan 26 21:18:13.189: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5439, will wait for the garbage collector to delete the pods Jan 26 21:18:13.325: INFO: Deleting DaemonSet.extensions daemon-set took: 57.918387ms Jan 26 21:18:13.727: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.48649ms Jan 26 21:18:18.735: INFO: Number of nodes with available pods: 0 Jan 26 21:18:18.735: INFO: Number of running nodes: 0, number of available pods: 0 Jan 26 21:18:18.740: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5439/daemonsets","resourceVersion":"4536819"},"items":null} Jan 26 21:18:18.743: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5439/pods","resourceVersion":"4536819"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:18:18.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5439" for this suite. • [SLOW TEST:29.326 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":32,"skipped":572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:18:18.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 26 21:18:19.015: INFO: Waiting up to 5m0s for pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094" in namespace "downward-api-5069" to be "success or failure" Jan 26 21:18:19.024: INFO: Pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639394ms Jan 26 21:18:21.031: INFO: Pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015187198s Jan 26 21:18:23.037: INFO: Pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021145058s Jan 26 21:18:25.043: INFO: Pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027529207s Jan 26 21:18:27.053: INFO: Pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037080047s STEP: Saw pod success Jan 26 21:18:27.053: INFO: Pod "downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094" satisfied condition "success or failure" Jan 26 21:18:27.062: INFO: Trying to get logs from node jerma-node pod downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094 container dapi-container: STEP: delete the pod Jan 26 21:18:27.339: INFO: Waiting for pod downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094 to disappear Jan 26 21:18:27.357: INFO: Pod downward-api-7a58d676-0e81-4d9d-8010-a1f55c47a094 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:18:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5069" for this suite. • [SLOW TEST:8.433 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":608,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:18:27.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:18:44.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6444" for this suite. • [SLOW TEST:17.289 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":34,"skipped":608,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:18:44.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0126 21:19:15.474865 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 26 21:19:15.474: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:19:15.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7683" for this suite. • [SLOW TEST:30.826 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":35,"skipped":612,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:19:15.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 26 21:19:22.337: INFO: 1 pod remaining Jan 26 21:19:22.337: INFO: 0 pods have nil DeletionTimestamp Jan 26 21:19:22.337: INFO: STEP: Gathering metrics W0126 21:19:23.450787 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 21:19:23.450: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:19:23.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4543" for this suite. • [SLOW TEST:7.972 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":36,"skipped":626,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:19:23.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:19:26.384: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 26 21:19:30.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 create -f -' Jan 26 21:19:34.271: INFO: stderr: "" Jan 26 21:19:34.271: INFO: stdout: "e2e-test-crd-publish-openapi-7080-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 26 21:19:34.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 delete e2e-test-crd-publish-openapi-7080-crds test-foo' Jan 26 21:19:35.076: INFO: stderr: "" Jan 26 21:19:35.076: INFO: stdout: "e2e-test-crd-publish-openapi-7080-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 26 21:19:35.077: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 apply -f -' Jan 26 21:19:36.662: INFO: stderr: "" Jan 26 21:19:36.663: INFO: stdout: "e2e-test-crd-publish-openapi-7080-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 26 21:19:36.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 delete e2e-test-crd-publish-openapi-7080-crds test-foo' Jan 26 21:19:37.422: INFO: stderr: "" Jan 26 21:19:37.423: INFO: stdout: "e2e-test-crd-publish-openapi-7080-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 26 21:19:37.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 create -f -' Jan 26 21:19:37.729: INFO: rc: 1 Jan 26 21:19:37.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 apply -f -' Jan 26 21:19:38.143: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 26 21:19:38.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 create -f -' Jan 26 21:19:38.440: INFO: rc: 1 Jan 26 21:19:38.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2211 apply -f -' Jan 26 21:19:38.880: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 26 21:19:38.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7080-crds' Jan 26 21:19:39.283: INFO: stderr: "" Jan 26 21:19:39.283: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7080-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 26 21:19:39.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7080-crds.metadata' Jan 26 21:19:39.574: INFO: stderr: "" Jan 26 21:19:39.574: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7080-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 26 21:19:39.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7080-crds.spec' Jan 26 21:19:40.083: INFO: stderr: "" Jan 26 21:19:40.083: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7080-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 26 21:19:40.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7080-crds.spec.bars' Jan 26 21:19:40.376: INFO: stderr: "" Jan 26 21:19:40.377: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7080-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 26 21:19:40.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7080-crds.spec.bars2' Jan 26 21:19:40.761: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:19:44.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2211" for this suite. 
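[Editor's note] The `rc: 1` results above are kubectl's client-side validation at work: because the CRD publishes a structural OpenAPI v3 schema, kubectl can reject objects with unknown or missing required properties before any request is sent, and `kubectl explain` can render the per-field documentation seen in the stdout dumps. Below is a rough sketch of a CRD carrying a comparable schema, not the e2e fixture itself; the group, kind, and field names are invented, and the signatures assume a recent apiextensions client.

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // invented
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// The published schema is what drives both client-side
				// validation and `kubectl explain`.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"bars": {
										Type: "array",
										Items: &apiextv1.JSONSchemaPropsOrArray{
											Schema: &apiextv1.JSONSchemaProps{
												Type:     "object",
												Required: []string{"name"}, // missing "name" fails create/apply
												Properties: map[string]apiextv1.JSONSchemaProps{
													"name": {Type: "string"},
													"age":  {Type: "string"},
												},
											},
										},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().
		Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}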
• [SLOW TEST:21.289 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":37,"skipped":667,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:19:44.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 26 21:19:45.014: INFO: Number of nodes with available pods: 0 Jan 26 21:19:45.014: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:46.024: INFO: Number of nodes with available pods: 0 Jan 26 21:19:46.024: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:47.229: INFO: Number of nodes with available pods: 0 Jan 26 21:19:47.229: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:48.214: INFO: Number of nodes with available pods: 0 Jan 26 21:19:48.214: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:49.025: INFO: Number of nodes with available pods: 0 Jan 26 21:19:49.025: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:50.023: INFO: Number of nodes with available pods: 0 Jan 26 21:19:50.023: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:52.057: INFO: Number of nodes with available pods: 0 Jan 26 21:19:52.057: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:53.192: INFO: Number of nodes with available pods: 1 Jan 26 21:19:53.192: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 26 21:19:54.030: INFO: Number of nodes with available pods: 1 Jan 26 21:19:54.030: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 26 21:19:55.028: INFO: Number of nodes with available pods: 2 Jan 26 21:19:55.028: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 26 21:19:55.073: INFO: Number of nodes with available pods: 1 Jan 26 21:19:55.074: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:56.091: INFO: Number of nodes with available pods: 1 Jan 26 21:19:56.091: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:57.083: INFO: Number of nodes with available pods: 1 Jan 26 21:19:57.084: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:58.089: INFO: Number of nodes with available pods: 1 Jan 26 21:19:58.089: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:19:59.086: INFO: Number of nodes with available pods: 1 Jan 26 21:19:59.086: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:20:00.084: INFO: Number of nodes with available pods: 1 Jan 26 21:20:00.085: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:20:01.099: INFO: Number of nodes with available pods: 1 Jan 26 21:20:01.099: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:20:02.083: INFO: Number of nodes with available pods: 1 Jan 26 21:20:02.083: INFO: Node jerma-node is running more than one daemon pod Jan 26 21:20:03.086: INFO: Number of nodes with available pods: 2 Jan 26 21:20:03.086: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9427, will wait for the garbage collector to delete the pods Jan 26 21:20:03.157: INFO: Deleting DaemonSet.extensions daemon-set took: 7.367756ms Jan 26 21:20:03.458: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.68422ms Jan 26 21:20:13.193: INFO: Number of nodes with available pods: 0 Jan 26 21:20:13.193: INFO: Number of running nodes: 0, number of available pods: 0 Jan 26 21:20:13.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9427/daemonsets","resourceVersion":"4537390"},"items":null} Jan 26 21:20:13.204: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9427/pods","resourceVersion":"4537390"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:20:13.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9427" for this suite. 
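[Editor's note] Both DaemonSet cases above ("run and stop complex daemon" and "retry creating failed daemon pods") drive scheduling purely through labels: a daemon pod runs on a node exactly while the node's labels match the DaemonSet's node selector, which is why flipping the label from blue to green unschedules and reschedules the pods. A minimal sketch of the node-selector variant with a RollingUpdate strategy; the namespace, label key, and image below are placeholders, not the test's values.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Mirrors the strategy switch exercised in the complex test.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Daemon pods land only on nodes carrying this label;
					// relabeling a node starts or stops its daemon pod.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // placeholder image
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().DaemonSets("default").
		Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}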
• [SLOW TEST:28.479 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":38,"skipped":688,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:20:13.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:20:13.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 26 21:20:13.418: INFO: stderr: "" Jan 26 21:20:13.418: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:20:13.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6080" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":39,"skipped":691,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:20:13.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 26 21:20:27.689: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:27.705: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:29.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:29.714: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:31.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:31.713: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:33.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:33.716: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:35.706: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:35.714: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:37.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:37.712: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:39.706: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:39.714: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:41.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:41.714: INFO: Pod pod-with-poststart-http-hook still exists Jan 26 21:20:43.705: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 26 21:20:44.051: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:20:44.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9886" for this suite. 
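[Editor's note] The long "Waiting for pod ... to disappear" tail above is only the cleanup half; the poststart assertion passes once the kubelet has fired an HTTP GET at the handler container created in BeforeEach. A minimal sketch of a pod carrying such a hook, assuming a recent client-go (where the hook handler type is LifecycleHandler; older releases call it Handler). The path, host IP, and port below are invented stand-ins for the test's handler pod.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					// Right after the container starts, the kubelet issues
					// this GET; the test then checks the handler saw it.
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // invented path
							Host: "10.96.0.10",          // invented handler address
							Port: intstr.FromInt(8080),  // invented port
						},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").
		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}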
• [SLOW TEST:30.628 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":699,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:20:44.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-f4075eef-d61e-44fc-bfd7-309fcc51a8e9 STEP: Creating a pod to test consume secrets Jan 26 21:20:44.476: INFO: Waiting up to 5m0s for pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09" in namespace "secrets-9289" to be "success or failure" Jan 26 21:20:44.516: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09": Phase="Pending", Reason="", readiness=false. Elapsed: 40.436228ms Jan 26 21:20:46.527: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050925718s Jan 26 21:20:48.541: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064766912s Jan 26 21:20:50.555: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079054432s Jan 26 21:20:52.572: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096131071s Jan 26 21:20:54.581: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.105378338s STEP: Saw pod success Jan 26 21:20:54.582: INFO: Pod "pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09" satisfied condition "success or failure" Jan 26 21:20:54.588: INFO: Trying to get logs from node jerma-node pod pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09 container secret-volume-test: STEP: delete the pod Jan 26 21:20:54.651: INFO: Waiting for pod pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09 to disappear Jan 26 21:20:54.659: INFO: Pod pod-secrets-2d77da4d-4f4c-4b98-863a-f5dfc5f53e09 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:20:54.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9289" for this suite. STEP: Destroying namespace "secret-namespace-9598" for this suite. • [SLOW TEST:10.641 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":708,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:20:54.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Jan 26 21:21:04.885: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5243 PodName:pod-sharedvolume-c2b28509-2492-4be7-9317-a05344b782a3 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:21:04.885: INFO: >>> kubeConfig: /root/.kube/config I0126 21:21:04.990425 8 log.go:172] (0xc0061022c0) (0xc001b25400) Create stream I0126 21:21:04.990479 8 log.go:172] (0xc0061022c0) (0xc001b25400) Stream added, broadcasting: 1 I0126 21:21:04.995496 8 log.go:172] (0xc0061022c0) Reply frame received for 1 I0126 21:21:04.995555 8 log.go:172] (0xc0061022c0) (0xc001b254a0) Create stream I0126 21:21:04.995569 8 log.go:172] (0xc0061022c0) (0xc001b254a0) Stream added, broadcasting: 3 I0126 21:21:04.996760 8 log.go:172] (0xc0061022c0) Reply frame received for 3 I0126 21:21:04.996794 8 log.go:172] (0xc0061022c0) (0xc0020c0f00) Create stream I0126 21:21:04.996804 8 log.go:172] (0xc0061022c0) (0xc0020c0f00) Stream added, broadcasting: 5 I0126 21:21:04.998886 8 log.go:172] (0xc0061022c0)
Reply frame received for 5 I0126 21:21:05.079371 8 log.go:172] (0xc0061022c0) Data frame received for 3 I0126 21:21:05.079431 8 log.go:172] (0xc001b254a0) (3) Data frame handling I0126 21:21:05.079458 8 log.go:172] (0xc001b254a0) (3) Data frame sent I0126 21:21:05.140595 8 log.go:172] (0xc0061022c0) Data frame received for 1 I0126 21:21:05.140693 8 log.go:172] (0xc0061022c0) (0xc0020c0f00) Stream removed, broadcasting: 5 I0126 21:21:05.141044 8 log.go:172] (0xc001b25400) (1) Data frame handling I0126 21:21:05.141159 8 log.go:172] (0xc001b25400) (1) Data frame sent I0126 21:21:05.141279 8 log.go:172] (0xc0061022c0) (0xc001b254a0) Stream removed, broadcasting: 3 I0126 21:21:05.141397 8 log.go:172] (0xc0061022c0) (0xc001b25400) Stream removed, broadcasting: 1 I0126 21:21:05.141446 8 log.go:172] (0xc0061022c0) Go away received I0126 21:21:05.141951 8 log.go:172] (0xc0061022c0) (0xc001b25400) Stream removed, broadcasting: 1 I0126 21:21:05.141984 8 log.go:172] (0xc0061022c0) (0xc001b254a0) Stream removed, broadcasting: 3 I0126 21:21:05.141997 8 log.go:172] (0xc0061022c0) (0xc0020c0f00) Stream removed, broadcasting: 5 Jan 26 21:21:05.142: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:05.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5243" for this suite. • [SLOW TEST:10.452 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":42,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:05.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:21:05.237: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:05.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3662" for this suite. 
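[Editor's note] The CustomResourceDefinition status-subresource test above runs almost silently because it needs no pods; it only talks to the apiserver. A sketch of the three operations it exercises (get, update via status, patch via status), assuming a recent apiextensions clientset and an invented CRD name. On the /status endpoint only changes inside the .status stanza take effect.

package main

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	crds := apiextclient.NewForConfigOrDie(cfg).
		ApiextensionsV1().CustomResourceDefinitions()

	// Get the definition ("foos.example.com" is an invented name).
	crd, err := crds.Get(context.TODO(), "foos.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Update through the status subresource: only .status of the
	// submitted object is applied.
	if _, err := crds.UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Patch the status subresource by passing "status" as the trailing
	// subresource argument.
	patch := []byte(`{"status":{"storedVersions":["v1"]}}`)
	if _, err := crds.Patch(context.TODO(), crd.Name, types.MergePatchType,
		patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}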
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":43,"skipped":748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:05.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:21:06.341: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c" in namespace "security-context-test-2636" to be "success or failure" Jan 26 21:21:06.537: INFO: Pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c": Phase="Pending", Reason="", readiness=false. Elapsed: 195.082397ms Jan 26 21:21:08.553: INFO: Pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211250623s Jan 26 21:21:10.581: INFO: Pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239048644s Jan 26 21:21:12.600: INFO: Pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258127374s Jan 26 21:21:14.828: INFO: Pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.486072153s Jan 26 21:21:14.828: INFO: Pod "alpine-nnp-false-9e09a007-e059-45d7-9a86-af4abaacb46c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:14.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2636" for this suite. 
• [SLOW TEST:8.981 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:14.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:21:15.028: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 26 21:21:17.601: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:17.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4782" for this suite. 
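The quota-versus-controller interaction above can be reproduced with two objects: a ResourceQuota capping pods at two, and a ReplicationController requesting three replicas. A sketch, with illustrative names and image:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                    # only two pods may exist in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                    # exceeds the quota, so the RC surfaces a failure condition
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx             # illustrative image

Scaling replicas down to two or fewer clears the condition, which is what the "Scaling down rc" step verifies.
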
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":45,"skipped":827,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:17.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:31.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3580" for this suite. • [SLOW TEST:13.471 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":843,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:31.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 26 21:21:31.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed" in namespace "projected-2940" to be "success or failure" Jan 26 21:21:31.308: INFO: Pod "downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed": Phase="Pending", Reason="", readiness=false. Elapsed: 5.339109ms Jan 26 21:21:33.320: INFO: Pod "downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017179792s Jan 26 21:21:35.327: INFO: Pod "downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024278687s Jan 26 21:21:37.409: INFO: Pod "downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105990332s STEP: Saw pod success Jan 26 21:21:37.409: INFO: Pod "downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed" satisfied condition "success or failure" Jan 26 21:21:37.412: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed container client-container: STEP: delete the pod Jan 26 21:21:37.447: INFO: Waiting for pod downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed to disappear Jan 26 21:21:37.486: INFO: Pod downwardapi-volume-9f16f716-c4e5-4d3b-958c-9eced02f85ed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:37.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2940" for this suite. • [SLOW TEST:6.341 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":843,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:37.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8fde3cfd-f606-4b28-a5ce-beb11e163cf8 STEP: Creating configMap with name cm-test-opt-upd-65a28cf5-c69f-4439-a724-dca80341daa0 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8fde3cfd-f606-4b28-a5ce-beb11e163cf8 STEP: Updating configmap cm-test-opt-upd-65a28cf5-c69f-4439-a724-dca80341daa0 STEP: Creating configMap with name cm-test-opt-create-1f9fd5c6-1bb0-4a65-9ae4-f335a75f4d8e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:21:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1975" for this suite. 
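The "optional" behaviour exercised above hinges on the optional flag of the configMap volume source: the pod starts and keeps running even when a referenced ConfigMap is deleted, or does not exist yet. A trimmed sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: cm-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]  # stays up so volume updates can be observed
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-volume-del
    - name: cm-create
      mountPath: /etc/cm-volume-create
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del              # deleted mid-test; pod survives because optional
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create           # created only after the pod is running
      optional: true
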
• [SLOW TEST:12.469 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":857,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:21:49.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-fn9p STEP: Creating a pod to test atomic-volume-subpath Jan 26 21:21:50.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fn9p" in namespace "subpath-3810" to be "success or failure" Jan 26 21:21:50.238: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 18.327112ms Jan 26 21:21:52.480: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260015969s Jan 26 21:21:54.785: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.564667901s Jan 26 21:21:56.823: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602745171s Jan 26 21:21:58.829: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 8.609195857s Jan 26 21:22:00.835: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 10.614427758s Jan 26 21:22:02.844: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 12.623699459s Jan 26 21:22:04.866: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 14.646033673s Jan 26 21:22:06.874: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 16.653640473s Jan 26 21:22:08.954: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 18.734054494s Jan 26 21:22:10.963: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 20.743279962s Jan 26 21:22:12.973: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 22.75296216s Jan 26 21:22:14.981: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 24.760938821s Jan 26 21:22:16.988: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.768248074s Jan 26 21:22:19.254: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Running", Reason="", readiness=true. Elapsed: 29.033703285s Jan 26 21:22:21.411: INFO: Pod "pod-subpath-test-projected-fn9p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.190834572s STEP: Saw pod success Jan 26 21:22:21.411: INFO: Pod "pod-subpath-test-projected-fn9p" satisfied condition "success or failure" Jan 26 21:22:21.417: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-subpath-test-projected-fn9p container test-container-subpath-projected-fn9p: STEP: delete the pod Jan 26 21:22:22.058: INFO: Waiting for pod pod-subpath-test-projected-fn9p to disappear Jan 26 21:22:22.079: INFO: Pod pod-subpath-test-projected-fn9p no longer exists STEP: Deleting pod pod-subpath-test-projected-fn9p Jan 26 21:22:22.080: INFO: Deleting pod "pod-subpath-test-projected-fn9p" in namespace "subpath-3810" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:22:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3810" for this suite. • [SLOW TEST:32.128 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":49,"skipped":866,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:22:22.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Jan 26 21:22:22.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9356 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 26 21:22:30.340: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0126 21:22:29.128013 1419 log.go:172] (0xc0009f28f0) (0xc000a70140) Create stream\nI0126 21:22:29.128384 1419 log.go:172] (0xc0009f28f0) (0xc000a70140) Stream added, broadcasting: 1\nI0126 21:22:29.133332 1419 log.go:172] (0xc0009f28f0) Reply frame received for 1\nI0126 21:22:29.133361 1419 log.go:172] (0xc0009f28f0) (0xc0009241e0) Create stream\nI0126 21:22:29.133371 1419 log.go:172] (0xc0009f28f0) (0xc0009241e0) Stream added, broadcasting: 3\nI0126 21:22:29.134582 1419 log.go:172] (0xc0009f28f0) Reply frame received for 3\nI0126 21:22:29.134631 1419 log.go:172] (0xc0009f28f0) (0xc0006a19a0) Create stream\nI0126 21:22:29.134666 1419 log.go:172] (0xc0009f28f0) (0xc0006a19a0) Stream added, broadcasting: 5\nI0126 21:22:29.135979 1419 log.go:172] (0xc0009f28f0) Reply frame received for 5\nI0126 21:22:29.136010 1419 log.go:172] (0xc0009f28f0) (0xc000a70280) Create stream\nI0126 21:22:29.136028 1419 log.go:172] (0xc0009f28f0) (0xc000a70280) Stream added, broadcasting: 7\nI0126 21:22:29.139604 1419 log.go:172] (0xc0009f28f0) Reply frame received for 7\nI0126 21:22:29.140076 1419 log.go:172] (0xc0009241e0) (3) Writing data frame\nI0126 21:22:29.140233 1419 log.go:172] (0xc0009241e0) (3) Writing data frame\nI0126 21:22:29.144882 1419 log.go:172] (0xc0009f28f0) Data frame received for 5\nI0126 21:22:29.144911 1419 log.go:172] (0xc0006a19a0) (5) Data frame handling\nI0126 21:22:29.144941 1419 log.go:172] (0xc0006a19a0) (5) Data frame sent\nI0126 21:22:29.147738 1419 log.go:172] (0xc0009f28f0) Data frame received for 5\nI0126 21:22:29.147768 1419 log.go:172] (0xc0006a19a0) (5) Data frame handling\nI0126 21:22:29.147784 1419 log.go:172] (0xc0006a19a0) (5) Data frame sent\nI0126 21:22:30.201716 1419 log.go:172] (0xc0009f28f0) Data frame received for 1\nI0126 21:22:30.201788 1419 log.go:172] (0xc000a70140) (1) Data frame handling\nI0126 21:22:30.201867 1419 log.go:172] (0xc000a70140) (1) Data frame sent\nI0126 21:22:30.202037 1419 log.go:172] (0xc0009f28f0) (0xc000a70140) Stream removed, broadcasting: 1\nI0126 21:22:30.202164 1419 log.go:172] (0xc0009f28f0) (0xc0009241e0) Stream removed, broadcasting: 3\nI0126 21:22:30.202662 1419 log.go:172] (0xc0009f28f0) (0xc0006a19a0) Stream removed, broadcasting: 5\nI0126 21:22:30.202950 1419 log.go:172] (0xc0009f28f0) (0xc000a70280) Stream removed, broadcasting: 7\nI0126 21:22:30.202986 1419 log.go:172] (0xc0009f28f0) Go away received\nI0126 21:22:30.203320 1419 log.go:172] (0xc0009f28f0) (0xc000a70140) Stream removed, broadcasting: 1\nI0126 21:22:30.203336 1419 log.go:172] (0xc0009f28f0) (0xc0009241e0) Stream removed, broadcasting: 3\nI0126 21:22:30.203343 1419 log.go:172] (0xc0009f28f0) (0xc0006a19a0) Stream removed, broadcasting: 5\nI0126 21:22:30.203349 1419 log.go:172] (0xc0009f28f0) (0xc000a70280) Stream removed, broadcasting: 7\n" Jan 26 21:22:30.340: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:22:32.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9356" for this suite. 
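As the deprecation warning in the captured stderr notes, --generator=job/v1 was on its way out; the same workload can be written directly as a batch/v1 Job. A rough equivalent of the command above, sketch only:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure           # mirrors --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                      # mirrors --stdin; kubectl attach supplies the input

Deleting the Job afterwards (kubectl delete job e2e-test-rm-busybox-job) stands in for the --rm flag.
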
• [SLOW TEST:10.267 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":50,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:22:32.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:23:15.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9174" for this suite. 
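The three containers named in the STEP lines (terminate-cmd-rpa, -rpof, -rpn) differ only in the pod restartPolicy they run under: Always, OnFailure, and Never respectively, which is what drives the expected RestartCount, Phase, Ready, and State values. One variant as a sketch; image and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never                   # the rpa/rpof variants change only this field
  containers:
  - name: terminate-cmd-rpn
    image: busybox:1.29                  # illustrative image
    command: ["sh", "-c", "exit 0"]      # the exit code determines the expected Phase/State
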
• [SLOW TEST:43.187 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":897,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:23:15.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-da2f583e-cda1-4785-8b59-f0fec55d484d STEP: Creating a pod to test consume secrets Jan 26 21:23:15.767: INFO: Waiting up to 5m0s for pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c" in namespace "secrets-8573" to be "success or failure" Jan 26 21:23:15.797: INFO: Pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.332745ms Jan 26 21:23:17.823: INFO: Pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055638243s Jan 26 21:23:19.833: INFO: Pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065528635s Jan 26 21:23:21.864: INFO: Pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09693969s Jan 26 21:23:24.266: INFO: Pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.498698215s STEP: Saw pod success Jan 26 21:23:24.266: INFO: Pod "pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c" satisfied condition "success or failure" Jan 26 21:23:24.271: INFO: Trying to get logs from node jerma-node pod pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c container secret-volume-test: STEP: delete the pod Jan 26 21:23:24.703: INFO: Waiting for pod pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c to disappear Jan 26 21:23:24.712: INFO: Pod pod-secrets-ea848cec-9009-4b53-9f66-86d7e9fdc17c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:23:24.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8573" for this suite. 
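"Mappings and Item Mode" refers to the items list of a secret volume source, which remaps a secret key to a new path and sets a per-file mode. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29                  # illustrative image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map        # illustrative name
      items:
      - key: data-1                      # key in the Secret
        path: new-path-data-1            # remapped file name (the "mapping")
        mode: 0400                       # per-item file mode (the "Item Mode")
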
• [SLOW TEST:9.200 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":945,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:23:24.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Jan 26 21:23:24.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 26 21:23:24.999: INFO: stderr: "" Jan 26 21:23:24.999: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:23:24.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9577" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":53,"skipped":955,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:23:25.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 26 21:23:25.082: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:23:35.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9259" for this suite. • [SLOW TEST:10.271 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":54,"skipped":1030,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:23:35.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:23:52.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3121" for this suite. • [SLOW TEST:17.303 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":55,"skipped":1030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:23:52.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 26 21:23:52.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8581' Jan 26 21:23:52.960: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 26 21:23:52.960: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Jan 26 21:23:52.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-8581' Jan 26 21:23:53.214: INFO: stderr: "" Jan 26 21:23:53.215: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:23:53.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8581" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":56,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:23:53.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 26 21:23:53.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c" in namespace "downward-api-8109" to be "success or failure" Jan 26 21:23:53.347: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.258482ms Jan 26 21:23:55.357: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020868709s Jan 26 21:23:57.372: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036220164s Jan 26 21:23:59.380: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044080409s Jan 26 21:24:01.388: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05179638s Jan 26 21:24:03.395: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.059103474s STEP: Saw pod success Jan 26 21:24:03.395: INFO: Pod "downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c" satisfied condition "success or failure" Jan 26 21:24:03.400: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c container client-container: STEP: delete the pod Jan 26 21:24:03.556: INFO: Waiting for pod downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c to disappear Jan 26 21:24:03.564: INFO: Pod downwardapi-volume-4163293f-b9c8-472b-a253-02c8eca7565c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:24:03.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8109" for this suite. • [SLOW TEST:10.358 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1107,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:24:03.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 26 21:24:04.154: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 26 21:24:06.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:24:08.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:24:10.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670644, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 26 21:24:13.242: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:24:13.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7554" for this suite. STEP: Destroying namespace "webhook-7554-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.062 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":58,"skipped":1114,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:24:13.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-08366129-6c1c-47eb-97af-7592272881fa STEP: Creating a pod to test consume configMaps Jan 26 21:24:13.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f" in namespace "configmap-4028" to be "success or failure" Jan 26 21:24:13.831: INFO: Pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f": Phase="Pending", Reason="", readiness=false. Elapsed: 75.984838ms Jan 26 21:24:15.844: INFO: Pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089097938s Jan 26 21:24:17.868: INFO: Pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112533989s Jan 26 21:24:19.916: INFO: Pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160686432s Jan 26 21:24:21.922: INFO: Pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.166466261s STEP: Saw pod success Jan 26 21:24:21.922: INFO: Pod "pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f" satisfied condition "success or failure" Jan 26 21:24:21.924: INFO: Trying to get logs from node jerma-node pod pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f container configmap-volume-test: STEP: delete the pod Jan 26 21:24:21.954: INFO: Waiting for pod pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f to disappear Jan 26 21:24:21.959: INFO: Pod pod-configmaps-35074ac2-2838-4893-9672-dff322d4017f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:24:21.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4028" for this suite. 
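"Consumable in multiple volumes" means the same ConfigMap backs more than one volume in a single pod spec, roughly like this (illustrative names and image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume        # both volumes reference the same ConfigMap
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
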
• [SLOW TEST:8.315 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1131,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:24:21.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 26 21:24:23.170: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 26 21:24:25.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:24:27.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:24:29.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670663, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 26 21:24:32.583: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:24:33.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3892" for this suite. STEP: Destroying namespace "webhook-3892-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.712 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":60,"skipped":1136,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:24:33.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:24:33.736: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 26 21:24:36.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1303 create -f -' Jan 26 21:24:39.586: INFO: stderr: "" Jan 26 21:24:39.586: INFO: stdout: "e2e-test-crd-publish-openapi-3904-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 26 21:24:39.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1303 delete e2e-test-crd-publish-openapi-3904-crds test-cr' Jan 26 21:24:39.706: INFO: stderr: "" Jan 26 21:24:39.706: INFO: stdout: "e2e-test-crd-publish-openapi-3904-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 26 21:24:39.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1303 apply -f -' Jan 26 21:24:40.148: INFO: stderr: "" Jan 26 21:24:40.148: INFO: stdout: "e2e-test-crd-publish-openapi-3904-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 26 21:24:40.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1303 delete e2e-test-crd-publish-openapi-3904-crds test-cr' Jan 26 21:24:40.286: INFO: stderr: "" Jan 26 21:24:40.286: INFO: stdout: "e2e-test-crd-publish-openapi-3904-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 26 21:24:40.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3904-crds' Jan 26 21:24:40.574: INFO: stderr: "" Jan 26 21:24:40.574: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3904-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:24:44.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1303" for this suite. • [SLOW TEST:10.633 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":61,"skipped":1148,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:24:44.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-796629fc-5f00-4a21-8946-d75f74519329 STEP: Creating a pod to test consume configMaps Jan 26 21:24:44.418: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b" in namespace "projected-7417" to be "success or failure" Jan 26 21:24:44.427: INFO: Pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.561484ms Jan 26 21:24:46.434: INFO: Pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01593256s Jan 26 21:24:48.442: INFO: Pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023876202s Jan 26 21:24:50.454: INFO: Pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036482964s Jan 26 21:24:52.464: INFO: Pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.045854282s STEP: Saw pod success Jan 26 21:24:52.464: INFO: Pod "pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b" satisfied condition "success or failure" Jan 26 21:24:52.510: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b container projected-configmap-volume-test: STEP: delete the pod Jan 26 21:24:52.568: INFO: Waiting for pod pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b to disappear Jan 26 21:24:52.575: INFO: Pod pod-projected-configmaps-a8468ea8-d228-4e4f-b838-cc94ce20e09b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:24:52.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7417" for this suite. • [SLOW TEST:8.272 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:24:52.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-b809c4b0-aeb1-4019-b7a3-cc1465cdad86 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:25:02.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2319" for this suite. 
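For reference, the binary-data ConfigMap test above populates both the data and binaryData fields of a ConfigMap and mounts the result into a pod. A minimal sketch of that kind of manifest, in the same kubectl create-from-stdin style the suite itself uses; every name here is illustrative, not taken from this run:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-binary-cm
  data:
    text-data: "hello"
  binaryData:
    binary-file: AQIDBA==            # base64 for the raw bytes 0x01 0x02 0x03 0x04
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-binary-consumer
  spec:
    restartPolicy: Never
    containers:
    - name: consumer
      image: busybox
      command: ["sh", "-c", "cat /etc/cm/text-data && od -An -tx1 /etc/cm/binary-file"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: example-binary-cm
  EOF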
• [SLOW TEST:10.272 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1182,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:25:02.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:25:11.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6918" for this suite. • [SLOW TEST:8.249 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1185,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:25:11.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d551f104-52dd-4be6-873b-71a2c26453dd STEP: Creating a pod to test consume configMaps Jan 26 21:25:11.215: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6" in namespace 
"projected-1638" to be "success or failure" Jan 26 21:25:11.229: INFO: Pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.969367ms Jan 26 21:25:13.245: INFO: Pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030145153s Jan 26 21:25:15.262: INFO: Pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047106725s Jan 26 21:25:17.355: INFO: Pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1402279s Jan 26 21:25:19.368: INFO: Pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153517762s STEP: Saw pod success Jan 26 21:25:19.368: INFO: Pod "pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6" satisfied condition "success or failure" Jan 26 21:25:19.380: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6 container projected-configmap-volume-test: STEP: delete the pod Jan 26 21:25:19.516: INFO: Waiting for pod pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6 to disappear Jan 26 21:25:19.523: INFO: Pod pod-projected-configmaps-d6d8fd2a-bbf6-423e-a250-3daa643d53a6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:25:19.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1638" for this suite. • [SLOW TEST:8.432 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:25:19.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 26 21:25:27.714: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the 
termination notice Jan 26 21:25:32.968: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:25:32.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4442" for this suite. • [SLOW TEST:13.439 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":66,"skipped":1225,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:25:32.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 26 21:25:33.059: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:25:53.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8065" for this suite. 
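Both CustomResourcePublishOpenAPI tests in this stretch come down to the shape of the CRD's versions list. A rough sketch with an invented group and kind: the earlier test corresponds to setting x-kubernetes-preserve-unknown-fields: true at the schema root, and the rename test edits the name of an entry in versions, after which the schema is published under the new version and the old one disappears from the OpenAPI document.

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v2                    # renaming this (say, to v3) re-publishes the schema under the new version
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # allows arbitrary unknown properties at the root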
• [SLOW TEST:20.324 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":67,"skipped":1232,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:25:53.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-4a22ebfa-59f8-4964-901d-807a90adb71b [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:25:53.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9226" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":68,"skipped":1242,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:25:53.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 26 21:25:54.013: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 26 21:25:56.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:25:58.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:26:00.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670754, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670753, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 26 21:26:03.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:03.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1751" for this suite. STEP: Destroying namespace "webhook-1751-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.174 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":69,"skipped":1248,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:03.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 26 21:26:12.372: INFO: Successfully updated pod "labelsupdateab7cac6c-0c46-42c9-88ad-d044fdb95dc0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:14.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-364" for this suite. 
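The labels-update test above relies on the kubelet refreshing downward API volume contents after pod metadata changes. A minimal sketch of a pod whose labels are exposed through a projected volume (all names invented):

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-labels-pod
    labels:
      stage: initial
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels

After something like kubectl label pod example-labels-pod stage=updated --overwrite, the mounted labels file is eventually rewritten, which is the behavior the test polls for.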
• [SLOW TEST:10.803 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1257,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:14.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 26 21:26:14.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031" in namespace "projected-4832" to be "success or failure" Jan 26 21:26:14.502: INFO: Pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388729ms Jan 26 21:26:16.515: INFO: Pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023363406s Jan 26 21:26:18.527: INFO: Pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035409871s Jan 26 21:26:20.541: INFO: Pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04895869s Jan 26 21:26:22.550: INFO: Pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058240575s STEP: Saw pod success Jan 26 21:26:22.550: INFO: Pod "downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031" satisfied condition "success or failure" Jan 26 21:26:22.555: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031 container client-container: STEP: delete the pod Jan 26 21:26:22.641: INFO: Waiting for pod downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031 to disappear Jan 26 21:26:22.653: INFO: Pod downwardapi-volume-9531fa16-ff51-401c-acfd-3ce851a89031 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:22.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4832" for this suite. 
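The cpu-request test above reads a container's own resource request back through the downward API. A sketch under the same pattern (names illustrative); the divisor controls the unit written to the file:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-cpu-request
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: "cpu_request"
              resourceFieldRef:
                containerName: client-container
                resource: requests.cpu
                divisor: 1m        # with a 250m request the file contains "250"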
• [SLOW TEST:8.239 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1262,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:22.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 26 21:26:22.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7592' Jan 26 21:26:23.094: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 26 21:26:23.094: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Jan 26 21:26:25.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7592' Jan 26 21:26:25.555: INFO: stderr: "" Jan 26 21:26:25.555: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:25.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7592" for this suite. 
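The stderr captured above is the client itself warning that generator-based kubectl run was deprecated. On clients where the generator flag has since been removed, a roughly equivalent invocation for this test would be:

  # creates a Deployment directly (closest match to --generator=deployment/apps.v1)
  kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7592

  # note: plain "kubectl run NAME --image=..." on modern clients creates a bare Pod, not a Deployment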
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":72,"skipped":1269,"failed":0} ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:25.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 26 21:26:26.078: INFO: Waiting up to 5m0s for pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17" in namespace "downward-api-3598" to be "success or failure" Jan 26 21:26:26.114: INFO: Pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17": Phase="Pending", Reason="", readiness=false. Elapsed: 35.913541ms Jan 26 21:26:28.127: INFO: Pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049646447s Jan 26 21:26:30.135: INFO: Pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057776469s Jan 26 21:26:32.146: INFO: Pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068642183s Jan 26 21:26:34.155: INFO: Pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077621095s STEP: Saw pod success Jan 26 21:26:34.155: INFO: Pod "downward-api-a6a51a97-1392-444a-9f18-dff71d52af17" satisfied condition "success or failure" Jan 26 21:26:34.159: INFO: Trying to get logs from node jerma-node pod downward-api-a6a51a97-1392-444a-9f18-dff71d52af17 container dapi-container: STEP: delete the pod Jan 26 21:26:34.323: INFO: Waiting for pod downward-api-a6a51a97-1392-444a-9f18-dff71d52af17 to disappear Jan 26 21:26:34.330: INFO: Pod downward-api-a6a51a97-1392-444a-9f18-dff71d52af17 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:34.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3598" for this suite. 
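The downward API test above deliberately declares no resources.limits: when a container has no limit set, limits.cpu and limits.memory resolve to the node's allocatable capacity, which is exactly what the test asserts. A minimal sketch (names invented):

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-default-limits
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep _LIMIT"]
      # no resources.limits on purpose: the values below fall back to node allocatable
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory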
• [SLOW TEST:8.771 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1269,"failed":0} [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:34.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jan 26 21:26:35.084: INFO: created pod pod-service-account-defaultsa Jan 26 21:26:35.084: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 26 21:26:35.098: INFO: created pod pod-service-account-mountsa Jan 26 21:26:35.098: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 26 21:26:35.194: INFO: created pod pod-service-account-nomountsa Jan 26 21:26:35.194: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 26 21:26:35.251: INFO: created pod pod-service-account-defaultsa-mountspec Jan 26 21:26:35.252: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 26 21:26:35.261: INFO: created pod pod-service-account-mountsa-mountspec Jan 26 21:26:35.261: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 26 21:26:35.299: INFO: created pod pod-service-account-nomountsa-mountspec Jan 26 21:26:35.299: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 26 21:26:35.401: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 26 21:26:35.401: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 26 21:26:35.488: INFO: created pod pod-service-account-mountsa-nomountspec Jan 26 21:26:35.488: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 26 21:26:35.628: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 26 21:26:35.628: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:35.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2837" for this suite. 
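The nine pods created above enumerate the combinations behind token automounting: a pod-level automountServiceAccountToken overrides the ServiceAccount-level setting, and when neither is set the token is mounted by default. A sketch of the opt-out pair (names invented):

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa
  automountServiceAccountToken: false   # default for pods using this ServiceAccount
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-nomount-pod
  spec:
    serviceAccountName: nomount-sa
    automountServiceAccountToken: false  # pod-level value wins if both are set
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo 'no token mounted'; sleep 3600"]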
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":74,"skipped":1269,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:37.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:26:39.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3315" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":75,"skipped":1270,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:26:39.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-7179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7179.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.43_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7179.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7179.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7179.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7179.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.93.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.93.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.93.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.93.43_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 21:27:14.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.598: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.603: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.629: INFO: Unable to read jessie_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.643: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:14.672: INFO: Lookups using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 failed for: [wheezy_udp@dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_udp@dns-test-service.dns-7179.svc.cluster.local jessie_tcp@dns-test-service.dns-7179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local] Jan 26 21:27:19.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods 
dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.699: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.738: INFO: Unable to read jessie_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:19.818: INFO: Lookups using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 failed for: [wheezy_udp@dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_udp@dns-test-service.dns-7179.svc.cluster.local jessie_tcp@dns-test-service.dns-7179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local] Jan 26 21:27:24.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.694: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.698: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.729: INFO: Unable to read jessie_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the 
server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.732: INFO: Unable to read jessie_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.735: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.741: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:24.765: INFO: Lookups using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 failed for: [wheezy_udp@dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_udp@dns-test-service.dns-7179.svc.cluster.local jessie_tcp@dns-test-service.dns-7179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local] Jan 26 21:27:29.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.698: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.731: INFO: Unable to read jessie_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.741: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.752: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod 
dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:29.816: INFO: Lookups using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 failed for: [wheezy_udp@dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_udp@dns-test-service.dns-7179.svc.cluster.local jessie_tcp@dns-test-service.dns-7179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local] Jan 26 21:27:34.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.691: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.762: INFO: Unable to read jessie_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.776: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.781: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:34.821: INFO: Lookups using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 failed for: [wheezy_udp@dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_udp@dns-test-service.dns-7179.svc.cluster.local jessie_tcp@dns-test-service.dns-7179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local] Jan 26 
21:27:39.680: INFO: Unable to read wheezy_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.695: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.775: INFO: Unable to read jessie_udp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.779: INFO: Unable to read jessie_tcp@dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.787: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.794: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local from pod dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1: the server could not find the requested resource (get pods dns-test-cf354946-220f-4e60-bcea-3fac86b348b1) Jan 26 21:27:39.835: INFO: Lookups using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 failed for: [wheezy_udp@dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@dns-test-service.dns-7179.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_udp@dns-test-service.dns-7179.svc.cluster.local jessie_tcp@dns-test-service.dns-7179.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7179.svc.cluster.local] Jan 26 21:27:44.768: INFO: DNS probes using dns-7179/dns-test-cf354946-220f-4e60-bcea-3fac86b348b1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:27:45.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7179" for this suite. 
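The shell probes embedded above all query the records that a headless Service publishes. A sketch of the service shape behind names like dns-test-service.dns-7179.svc.cluster.local and its SRV form _http._tcp.dns-test-service.dns-7179.svc.cluster.local (selector and labels invented):

  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service
  spec:
    clusterIP: None        # headless: A records resolve to the backing pod IPs
    selector:
      app: dns-test
    ports:
    - name: http
      protocol: TCP
      port: 80

  # the probe containers then loop over lookups equivalent to:
  dig +short dns-test-service.dns-7179.svc.cluster.local A
  dig +short _http._tcp.dns-test-service.dns-7179.svc.cluster.local SRV

The repeated "Unable to read ..." lines are the framework re-polling the probe pod's result files until every lookup has succeeded; the test only fails if they never converge, and here they did.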
• [SLOW TEST:66.166 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":76,"skipped":1276,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:27:45.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 26 21:27:46.165: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 26 21:27:48.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:27:50.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:27:52.229: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 21:27:54.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715670866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 26 21:27:57.225: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:27:57.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-609" for this suite. STEP: Destroying namespace "webhook-609-markers" for this suite. 
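NOTE — repro sketch (annotation; not emitted by the test run): The discovery walk in the [It] block above can be reproduced against any cluster; kubectl can fetch the same raw documents the test parses:

  # /apis lists all API groups, including admissionregistration.k8s.io
  kubectl get --raw /apis
  # the group/version document should list both webhook configuration resources
  kubectl get --raw /apis/admissionregistration.k8s.io/v1

The v1 response is expected to include mutatingwebhookconfigurations and validatingwebhookconfigurations in its resource list, which is exactly what the final STEP above asserts.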
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.947 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":77,"skipped":1276,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:27:57.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 26 21:28:15.666: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:15.674: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:17.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:17.681: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:19.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:19.683: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:21.675: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:21.682: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:23.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:23.683: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:25.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:25.683: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:27.675: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:27.680: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:29.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:29.684: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:31.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:31.679: INFO: Pod pod-with-prestop-exec-hook still exists Jan 26 21:28:33.674: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 26 21:28:33.680: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:28:33.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1756" for this suite. • [SLOW TEST:36.342 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1296,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:28:33.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 26 21:28:33.874: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4539915 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 21:28:33.875: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4539915 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 26 21:28:43.904: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4539954 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 26 21:28:43.905: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4539954 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 26 21:28:53.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4539978 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 26 21:28:53.924: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4539978 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 26 21:29:03.940: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4540004 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 26 21:29:03.941: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-a 2adb23c4-146a-40f3-ab08-a631ff9ca437 4540004 0 2020-01-26 21:28:33 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 26 21:29:13.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-b 9e2d13d1-bdb2-4d7b-b011-4f21af6356b8 4540028 0 2020-01-26 21:29:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 21:29:13.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-b 9e2d13d1-bdb2-4d7b-b011-4f21af6356b8 4540028 0 2020-01-26 21:29:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 26 21:29:23.972: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-b 9e2d13d1-bdb2-4d7b-b011-4f21af6356b8 4540050 0 2020-01-26 21:29:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 21:29:23.973: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4430 /api/v1/namespaces/watch-4430/configmaps/e2e-watch-test-configmap-b 9e2d13d1-bdb2-4d7b-b011-4f21af6356b8 4540050 0 2020-01-26 21:29:13 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:29:33.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4430" for this suite. • [SLOW TEST:60.254 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":79,"skipped":1300,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:29:33.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3451 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 26 21:29:34.064: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 26 21:30:08.292: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3451 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:30:08.292: INFO: >>> kubeConfig: /root/.kube/config I0126 21:30:08.402931 8 log.go:172] (0xc000b42000) (0xc0021fc3c0) Create stream I0126 21:30:08.403021 8 log.go:172] (0xc000b42000) (0xc0021fc3c0) Stream added, broadcasting: 1 I0126 21:30:08.408037 8 log.go:172] (0xc000b42000) Reply frame received for 1 I0126 21:30:08.408080 8 log.go:172] (0xc000b42000) (0xc001c0a460) Create stream I0126 21:30:08.408092 8 log.go:172] (0xc000b42000) (0xc001c0a460) Stream added, broadcasting: 3 I0126 21:30:08.409824 8 log.go:172] (0xc000b42000) Reply frame received for 3 I0126 21:30:08.409855 8 log.go:172] (0xc000b42000) (0xc0021fc500) Create stream I0126 21:30:08.409863 8 log.go:172] (0xc000b42000) (0xc0021fc500) Stream added, broadcasting: 5 I0126 21:30:08.411916 8 log.go:172] (0xc000b42000) Reply frame received for 5 I0126 21:30:08.506360 8 log.go:172] (0xc000b42000) Data frame received for 3 I0126 21:30:08.506490 8 log.go:172] (0xc001c0a460) (3) Data frame handling I0126 21:30:08.506598 8 log.go:172] (0xc001c0a460) (3) Data frame sent I0126 21:30:08.634826 8 log.go:172] (0xc000b42000) (0xc001c0a460) Stream removed, broadcasting: 3 I0126 21:30:08.635363 8 log.go:172] (0xc000b42000) Data frame received for 1 I0126 21:30:08.635531 8 log.go:172] (0xc000b42000) (0xc0021fc500) Stream removed, broadcasting: 5 
I0126 21:30:08.635634 8 log.go:172] (0xc0021fc3c0) (1) Data frame handling I0126 21:30:08.635784 8 log.go:172] (0xc0021fc3c0) (1) Data frame sent I0126 21:30:08.635924 8 log.go:172] (0xc000b42000) (0xc0021fc3c0) Stream removed, broadcasting: 1 I0126 21:30:08.636078 8 log.go:172] (0xc000b42000) Go away received I0126 21:30:08.636923 8 log.go:172] (0xc000b42000) (0xc0021fc3c0) Stream removed, broadcasting: 1 I0126 21:30:08.636981 8 log.go:172] (0xc000b42000) (0xc001c0a460) Stream removed, broadcasting: 3 I0126 21:30:08.637005 8 log.go:172] (0xc000b42000) (0xc0021fc500) Stream removed, broadcasting: 5 Jan 26 21:30:08.637: INFO: Waiting for responses: map[] Jan 26 21:30:08.645: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3451 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 21:30:08.646: INFO: >>> kubeConfig: /root/.kube/config I0126 21:30:08.712420 8 log.go:172] (0xc00309c2c0) (0xc001fccd20) Create stream I0126 21:30:08.712665 8 log.go:172] (0xc00309c2c0) (0xc001fccd20) Stream added, broadcasting: 1 I0126 21:30:08.717061 8 log.go:172] (0xc00309c2c0) Reply frame received for 1 I0126 21:30:08.717173 8 log.go:172] (0xc00309c2c0) (0xc001fccdc0) Create stream I0126 21:30:08.717190 8 log.go:172] (0xc00309c2c0) (0xc001fccdc0) Stream added, broadcasting: 3 I0126 21:30:08.719471 8 log.go:172] (0xc00309c2c0) Reply frame received for 3 I0126 21:30:08.719525 8 log.go:172] (0xc00309c2c0) (0xc0021fc820) Create stream I0126 21:30:08.719539 8 log.go:172] (0xc00309c2c0) (0xc0021fc820) Stream added, broadcasting: 5 I0126 21:30:08.720781 8 log.go:172] (0xc00309c2c0) Reply frame received for 5 I0126 21:30:08.802428 8 log.go:172] (0xc00309c2c0) Data frame received for 3 I0126 21:30:08.802470 8 log.go:172] (0xc001fccdc0) (3) Data frame handling I0126 21:30:08.802519 8 log.go:172] (0xc001fccdc0) (3) Data frame sent I0126 21:30:08.897872 8 log.go:172] (0xc00309c2c0) (0xc001fccdc0) Stream removed, broadcasting: 3 I0126 21:30:08.898082 8 log.go:172] (0xc00309c2c0) Data frame received for 1 I0126 21:30:08.898148 8 log.go:172] (0xc001fccd20) (1) Data frame handling I0126 21:30:08.898213 8 log.go:172] (0xc001fccd20) (1) Data frame sent I0126 21:30:08.898244 8 log.go:172] (0xc00309c2c0) (0xc001fccd20) Stream removed, broadcasting: 1 I0126 21:30:08.898619 8 log.go:172] (0xc00309c2c0) (0xc0021fc820) Stream removed, broadcasting: 5 I0126 21:30:08.898757 8 log.go:172] (0xc00309c2c0) Go away received I0126 21:30:08.899088 8 log.go:172] (0xc00309c2c0) (0xc001fccd20) Stream removed, broadcasting: 1 I0126 21:30:08.899181 8 log.go:172] (0xc00309c2c0) (0xc001fccdc0) Stream removed, broadcasting: 3 I0126 21:30:08.899216 8 log.go:172] (0xc00309c2c0) (0xc0021fc820) Stream removed, broadcasting: 5 Jan 26 21:30:08.899: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:30:08.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3451" for this suite. 
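NOTE — repro sketch (annotation; not emitted by the test run): The two ExecWithOptions calls above drive agnhost's /dial endpoint: from host-test-container-pod, curl asks the agnhost pod at 10.44.0.2 to fetch /hostname from each target pod IP (10.44.0.1, then 10.32.0.4) over HTTP. While the test namespace still existed, the same check could be issued by hand (the pod IPs are ephemeral to this run):

  kubectl exec -n pod-network-test-3451 host-test-container-pod -c agnhost -- \
    /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'"

"Waiting for responses: map[]" in the log means the set of expected hostnames has been fully drained, i.e. every target answered, so the test proceeds.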
• [SLOW TEST:34.926 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1308,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:30:08.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4076 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4076 I0126 21:30:09.187054 8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4076, replica count: 2 I0126 21:30:12.238440 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:30:15.239497 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:30:18.240578 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0126 21:30:21.241062 8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 26 21:30:21.241: INFO: Creating new exec pod Jan 26 21:30:30.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4076 execpodmcb4t -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 26 21:30:30.870: INFO: stderr: "I0126 21:30:30.588797 1663 log.go:172] (0xc0000f4fd0) (0xc0007641e0) Create stream\nI0126 21:30:30.589317 1663 log.go:172] (0xc0000f4fd0) (0xc0007641e0) Stream added, broadcasting: 1\nI0126 21:30:30.598266 1663 log.go:172] (0xc0000f4fd0) Reply frame received for 1\nI0126 21:30:30.598372 1663 log.go:172] (0xc0000f4fd0) (0xc0007d4000) Create stream\nI0126 21:30:30.598396 1663 log.go:172] (0xc0000f4fd0) (0xc0007d4000) Stream added, broadcasting: 3\nI0126 21:30:30.602799 1663 log.go:172] (0xc0000f4fd0) Reply frame received for 3\nI0126 21:30:30.602862 1663 
log.go:172] (0xc0000f4fd0) (0xc0007ea000) Create stream\nI0126 21:30:30.602904 1663 log.go:172] (0xc0000f4fd0) (0xc0007ea000) Stream added, broadcasting: 5\nI0126 21:30:30.605495 1663 log.go:172] (0xc0000f4fd0) Reply frame received for 5\nI0126 21:30:30.718518 1663 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0126 21:30:30.718770 1663 log.go:172] (0xc0007ea000) (5) Data frame handling\nI0126 21:30:30.718788 1663 log.go:172] (0xc0007ea000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0126 21:30:30.729164 1663 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0126 21:30:30.729252 1663 log.go:172] (0xc0007ea000) (5) Data frame handling\nI0126 21:30:30.729260 1663 log.go:172] (0xc0007ea000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0126 21:30:30.848659 1663 log.go:172] (0xc0000f4fd0) (0xc0007ea000) Stream removed, broadcasting: 5\nI0126 21:30:30.848967 1663 log.go:172] (0xc0000f4fd0) (0xc0007d4000) Stream removed, broadcasting: 3\nI0126 21:30:30.849071 1663 log.go:172] (0xc0000f4fd0) Data frame received for 1\nI0126 21:30:30.849115 1663 log.go:172] (0xc0007641e0) (1) Data frame handling\nI0126 21:30:30.849152 1663 log.go:172] (0xc0007641e0) (1) Data frame sent\nI0126 21:30:30.849159 1663 log.go:172] (0xc0000f4fd0) (0xc0007641e0) Stream removed, broadcasting: 1\nI0126 21:30:30.849173 1663 log.go:172] (0xc0000f4fd0) Go away received\nI0126 21:30:30.850869 1663 log.go:172] (0xc0000f4fd0) (0xc0007641e0) Stream removed, broadcasting: 1\nI0126 21:30:30.850893 1663 log.go:172] (0xc0000f4fd0) (0xc0007d4000) Stream removed, broadcasting: 3\nI0126 21:30:30.850907 1663 log.go:172] (0xc0000f4fd0) (0xc0007ea000) Stream removed, broadcasting: 5\n" Jan 26 21:30:30.871: INFO: stdout: "" Jan 26 21:30:30.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4076 execpodmcb4t -- /bin/sh -x -c nc -zv -t -w 2 10.96.109.141 80' Jan 26 21:30:31.430: INFO: stderr: "I0126 21:30:31.206263 1679 log.go:172] (0xc000a86370) (0xc000029e00) Create stream\nI0126 21:30:31.206977 1679 log.go:172] (0xc000a86370) (0xc000029e00) Stream added, broadcasting: 1\nI0126 21:30:31.218265 1679 log.go:172] (0xc000a86370) Reply frame received for 1\nI0126 21:30:31.218486 1679 log.go:172] (0xc000a86370) (0xc00080a000) Create stream\nI0126 21:30:31.218563 1679 log.go:172] (0xc000a86370) (0xc00080a000) Stream added, broadcasting: 3\nI0126 21:30:31.219870 1679 log.go:172] (0xc000a86370) Reply frame received for 3\nI0126 21:30:31.219925 1679 log.go:172] (0xc000a86370) (0xc00080a0a0) Create stream\nI0126 21:30:31.219936 1679 log.go:172] (0xc000a86370) (0xc00080a0a0) Stream added, broadcasting: 5\nI0126 21:30:31.221043 1679 log.go:172] (0xc000a86370) Reply frame received for 5\nI0126 21:30:31.308043 1679 log.go:172] (0xc000a86370) Data frame received for 5\nI0126 21:30:31.308298 1679 log.go:172] (0xc00080a0a0) (5) Data frame handling\nI0126 21:30:31.308342 1679 log.go:172] (0xc00080a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.109.141 80\nI0126 21:30:31.308565 1679 log.go:172] (0xc000a86370) Data frame received for 5\nI0126 21:30:31.308593 1679 log.go:172] (0xc00080a0a0) (5) Data frame handling\nI0126 21:30:31.308616 1679 log.go:172] (0xc00080a0a0) (5) Data frame sent\nConnection to 10.96.109.141 80 port [tcp/http] succeeded!\nI0126 21:30:31.414360 1679 log.go:172] (0xc000a86370) Data frame received for 1\nI0126 21:30:31.414616 1679 log.go:172] (0xc000a86370) (0xc00080a0a0) Stream removed, broadcasting: 5\nI0126 
21:30:31.414703 1679 log.go:172] (0xc000029e00) (1) Data frame handling\nI0126 21:30:31.414759 1679 log.go:172] (0xc000a86370) (0xc00080a000) Stream removed, broadcasting: 3\nI0126 21:30:31.414825 1679 log.go:172] (0xc000029e00) (1) Data frame sent\nI0126 21:30:31.414851 1679 log.go:172] (0xc000a86370) (0xc000029e00) Stream removed, broadcasting: 1\nI0126 21:30:31.414873 1679 log.go:172] (0xc000a86370) Go away received\nI0126 21:30:31.416309 1679 log.go:172] (0xc000a86370) (0xc000029e00) Stream removed, broadcasting: 1\nI0126 21:30:31.416355 1679 log.go:172] (0xc000a86370) (0xc00080a000) Stream removed, broadcasting: 3\nI0126 21:30:31.416383 1679 log.go:172] (0xc000a86370) (0xc00080a0a0) Stream removed, broadcasting: 5\n" Jan 26 21:30:31.431: INFO: stdout: "" Jan 26 21:30:31.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4076 execpodmcb4t -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30728' Jan 26 21:30:31.747: INFO: stderr: "I0126 21:30:31.566077 1699 log.go:172] (0xc0009da0b0) (0xc000a9a1e0) Create stream\nI0126 21:30:31.566329 1699 log.go:172] (0xc0009da0b0) (0xc000a9a1e0) Stream added, broadcasting: 1\nI0126 21:30:31.572070 1699 log.go:172] (0xc0009da0b0) Reply frame received for 1\nI0126 21:30:31.572112 1699 log.go:172] (0xc0009da0b0) (0xc0004da640) Create stream\nI0126 21:30:31.572120 1699 log.go:172] (0xc0009da0b0) (0xc0004da640) Stream added, broadcasting: 3\nI0126 21:30:31.573391 1699 log.go:172] (0xc0009da0b0) Reply frame received for 3\nI0126 21:30:31.573423 1699 log.go:172] (0xc0009da0b0) (0xc0007014a0) Create stream\nI0126 21:30:31.573428 1699 log.go:172] (0xc0009da0b0) (0xc0007014a0) Stream added, broadcasting: 5\nI0126 21:30:31.575254 1699 log.go:172] (0xc0009da0b0) Reply frame received for 5\nI0126 21:30:31.651760 1699 log.go:172] (0xc0009da0b0) Data frame received for 5\nI0126 21:30:31.651913 1699 log.go:172] (0xc0007014a0) (5) Data frame handling\nI0126 21:30:31.651933 1699 log.go:172] (0xc0007014a0) (5) Data frame sent\nI0126 21:30:31.651942 1699 log.go:172] (0xc0009da0b0) Data frame received for 5\n+ nc -zv -t -w 2 10.96.2.250 30728\nI0126 21:30:31.651947 1699 log.go:172] (0xc0007014a0) (5) Data frame handling\nI0126 21:30:31.651988 1699 log.go:172] (0xc0007014a0) (5) Data frame sent\nConnection to 10.96.2.250 30728 port [tcp/30728] succeeded!\nI0126 21:30:31.738885 1699 log.go:172] (0xc0009da0b0) Data frame received for 1\nI0126 21:30:31.739034 1699 log.go:172] (0xc0009da0b0) (0xc0007014a0) Stream removed, broadcasting: 5\nI0126 21:30:31.739103 1699 log.go:172] (0xc000a9a1e0) (1) Data frame handling\nI0126 21:30:31.739129 1699 log.go:172] (0xc000a9a1e0) (1) Data frame sent\nI0126 21:30:31.739156 1699 log.go:172] (0xc0009da0b0) (0xc0004da640) Stream removed, broadcasting: 3\nI0126 21:30:31.739182 1699 log.go:172] (0xc0009da0b0) (0xc000a9a1e0) Stream removed, broadcasting: 1\nI0126 21:30:31.739205 1699 log.go:172] (0xc0009da0b0) Go away received\nI0126 21:30:31.739983 1699 log.go:172] (0xc0009da0b0) (0xc000a9a1e0) Stream removed, broadcasting: 1\nI0126 21:30:31.740023 1699 log.go:172] (0xc0009da0b0) (0xc0004da640) Stream removed, broadcasting: 3\nI0126 21:30:31.740041 1699 log.go:172] (0xc0009da0b0) (0xc0007014a0) Stream removed, broadcasting: 5\n" Jan 26 21:30:31.747: INFO: stdout: "" Jan 26 21:30:31.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4076 execpodmcb4t -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30728' Jan 26 21:30:32.097: INFO: stderr: "I0126 
21:30:31.911589 1714 log.go:172] (0xc000a7d810) (0xc000a44960) Create stream\nI0126 21:30:31.911947 1714 log.go:172] (0xc000a7d810) (0xc000a44960) Stream added, broadcasting: 1\nI0126 21:30:31.926531 1714 log.go:172] (0xc000a7d810) Reply frame received for 1\nI0126 21:30:31.926637 1714 log.go:172] (0xc000a7d810) (0xc00067a640) Create stream\nI0126 21:30:31.926660 1714 log.go:172] (0xc000a7d810) (0xc00067a640) Stream added, broadcasting: 3\nI0126 21:30:31.928660 1714 log.go:172] (0xc000a7d810) Reply frame received for 3\nI0126 21:30:31.928687 1714 log.go:172] (0xc000a7d810) (0xc00074b400) Create stream\nI0126 21:30:31.928694 1714 log.go:172] (0xc000a7d810) (0xc00074b400) Stream added, broadcasting: 5\nI0126 21:30:31.930701 1714 log.go:172] (0xc000a7d810) Reply frame received for 5\nI0126 21:30:32.004407 1714 log.go:172] (0xc000a7d810) Data frame received for 5\nI0126 21:30:32.004548 1714 log.go:172] (0xc00074b400) (5) Data frame handling\nI0126 21:30:32.004582 1714 log.go:172] (0xc00074b400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30728\nI0126 21:30:32.011260 1714 log.go:172] (0xc000a7d810) Data frame received for 5\nI0126 21:30:32.011289 1714 log.go:172] (0xc00074b400) (5) Data frame handling\nI0126 21:30:32.011309 1714 log.go:172] (0xc00074b400) (5) Data frame sent\nConnection to 10.96.1.234 30728 port [tcp/30728] succeeded!\nI0126 21:30:32.085619 1714 log.go:172] (0xc000a7d810) Data frame received for 1\nI0126 21:30:32.085669 1714 log.go:172] (0xc000a44960) (1) Data frame handling\nI0126 21:30:32.085713 1714 log.go:172] (0xc000a44960) (1) Data frame sent\nI0126 21:30:32.085880 1714 log.go:172] (0xc000a7d810) (0xc000a44960) Stream removed, broadcasting: 1\nI0126 21:30:32.087335 1714 log.go:172] (0xc000a7d810) (0xc00067a640) Stream removed, broadcasting: 3\nI0126 21:30:32.087395 1714 log.go:172] (0xc000a7d810) (0xc00074b400) Stream removed, broadcasting: 5\nI0126 21:30:32.087451 1714 log.go:172] (0xc000a7d810) (0xc000a44960) Stream removed, broadcasting: 1\nI0126 21:30:32.087461 1714 log.go:172] (0xc000a7d810) (0xc00067a640) Stream removed, broadcasting: 3\nI0126 21:30:32.087469 1714 log.go:172] (0xc000a7d810) (0xc00074b400) Stream removed, broadcasting: 5\n" Jan 26 21:30:32.097: INFO: stdout: "" Jan 26 21:30:32.097: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:30:32.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4076" for this suite. 
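NOTE — repro sketch (annotation; not emitted by the test run): The sequence above is: create a type=ExternalName service, flip it to type=NodePort (which allocates node port 30728 and needs real endpoints, hence the replication controller), then verify reachability by service name, by ClusterIP:port, and by each node IP:nodePort. The flip is a one-line patch when done by hand; the nc probes below match the kubectl exec commands logged above:

  kubectl patch service externalname-service -n services-4076 \
    -p '{"spec":{"type":"NodePort"}}'
  # reachability checks, as run from the exec pod above
  kubectl exec -n services-4076 execpodmcb4t -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
  kubectl exec -n services-4076 execpodmcb4t -- /bin/sh -x -c 'nc -zv -t -w 2 10.96.109.141 80'
  kubectl exec -n services-4076 execpodmcb4t -- /bin/sh -x -c 'nc -zv -t -w 2 10.96.2.250 30728'

All names, IPs, and the node port are specific to this run (the second node, 10.96.1.234, is probed the same way) and are torn down with the namespace; the patch shown is an equivalent hand-issued change, not the test's own code path, since the test mutates the service through the Go client.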
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.241 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":81,"skipped":1314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:30:32.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 26 21:30:32.289: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-847 /api/v1/namespaces/watch-847/configmaps/e2e-watch-test-watch-closed c0e801cc-bf06-45bd-9fab-55a6b078732e 4540324 0 2020-01-26 21:30:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 21:30:32.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-847 /api/v1/namespaces/watch-847/configmaps/e2e-watch-test-watch-closed c0e801cc-bf06-45bd-9fab-55a6b078732e 4540325 0 2020-01-26 21:30:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 26 21:30:32.303: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-847 /api/v1/namespaces/watch-847/configmaps/e2e-watch-test-watch-closed c0e801cc-bf06-45bd-9fab-55a6b078732e 4540326 0 2020-01-26 21:30:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 26 21:30:32.303: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-847 /api/v1/namespaces/watch-847/configmaps/e2e-watch-test-watch-closed c0e801cc-bf06-45bd-9fab-55a6b078732e 4540327 0 2020-01-26 21:30:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:30:32.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-847" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":82,"skipped":1338,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:30:32.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9920.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9920.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9920.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 21:30:46.646: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.653: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.661: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.665: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.679: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.682: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.687: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.690: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:46.707: INFO: Lookups using dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local] Jan 26 21:30:51.721: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource 
(get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.730: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.737: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.744: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.764: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.770: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.778: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.784: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:51.805: INFO: Lookups using dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local] Jan 26 21:30:56.716: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.722: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.725: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.730: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local from 
pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.743: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.745: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.749: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.752: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:30:56.759: INFO: Lookups using dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local] Jan 26 21:31:01.719: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.727: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.734: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.741: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.757: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.762: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods 
dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.766: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.770: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:01.783: INFO: Lookups using dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local] Jan 26 21:31:06.716: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.722: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.728: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.732: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.742: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.745: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.748: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.751: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local from pod dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386: the server could not find the requested resource (get pods dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386) Jan 26 21:31:06.759: INFO: Lookups using dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9920.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9920.svc.cluster.local jessie_udp@dns-test-service-2.dns-9920.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9920.svc.cluster.local] Jan 26 21:31:11.782: INFO: DNS probes using dns-9920/dns-test-e3cb9ce5-3be2-43ae-a86e-7c81dbb18386 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:31:11.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9920" for this suite. • [SLOW TEST:39.629 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":83,"skipped":1348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:31:11.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 26 21:31:12.100: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:31:26.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4954" for this suite. 
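
For reference, the RestartNever init-container pod exercised by the test above can be sketched with the k8s.io/api Go types: all init containers must run to completion, in order, before the app container starts, and with restartPolicy Never a failed init container fails the whole pod. This is a minimal illustrative sketch, not the test's own manifest; the pod name, images, and commands are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			// RestartNever: init containers get one chance; a failure
			// fails the pod instead of triggering a restart.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox:1.29", Command: []string{"sh", "-c", "echo done"}},
			},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
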
• [SLOW TEST:14.447 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":84,"skipped":1406,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:31:26.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 26 21:31:26.515: INFO: >>> kubeConfig: /root/.kube/config Jan 26 21:31:29.927: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:31:44.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5037" for this suite. 
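
The CRD-publishing test above registers two CRDs that share a group and version but declare different kinds, then checks both show up in the served OpenAPI document. A sketch of such a pair using the apiextensions v1 Go types follows; the group "example.com" and the Foo/Bar kinds are illustrative assumptions, not the test's generated names.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// crd builds a namespaced CRD with a permissive object schema.
func crd(group, plural, singular, kind string) *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + "." + group},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: group,
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: plural, Singular: singular, Kind: kind,
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}

func main() {
	// Same group and version, two different kinds: both kinds end up
	// under the one group/version in the published OpenAPI document.
	for _, c := range []*apiextv1.CustomResourceDefinition{
		crd("example.com", "foos", "foo", "Foo"),
		crd("example.com", "bars", "bar", "Bar"),
	} {
		out, err := yaml.Marshal(c)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}
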
• [SLOW TEST:18.202 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":85,"skipped":1410,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:31:44.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 26 21:31:44.865: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6182 /api/v1/namespaces/watch-6182/configmaps/e2e-watch-test-resource-version 03816522-1cf1-4ba6-b4b8-630ba85a2e4f 4540636 0 2020-01-26 21:31:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 26 21:31:44.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6182 /api/v1/namespaces/watch-6182/configmaps/e2e-watch-test-resource-version 03816522-1cf1-4ba6-b4b8-630ba85a2e4f 4540637 0 2020-01-26 21:31:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:31:44.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6182" for this suite. 
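
The watch test that follows relies on a core API-machinery behavior: a watch started from a known resourceVersion replays only the events that happened after that version, which is why the client above sees exactly the second MODIFIED and the DELETED event. A minimal client-go sketch (assuming a current client-go where calls take a context.Context; the namespace and resourceVersion value are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Resource version captured from an earlier update; everything that
	// happened after it is delivered to the watcher in order.
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: "4540635", // illustrative value
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Blocks until the server closes the watch; a real client would
	// also handle timeouts and re-establishment.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
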
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":86,"skipped":1412,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:31:44.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 26 21:31:51.607: INFO: Successfully updated pod "labelsupdate0fa978c5-25e7-4653-a5aa-1af6c5f29208" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:31:55.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7895" for this suite. • [SLOW TEST:10.805 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1429,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:31:55.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-05efd932-6a15-45db-a277-d7fbfd69d478 STEP: Creating a pod to test consume secrets Jan 26 21:31:55.846: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b" in namespace "projected-1775" to be "success or failure" Jan 26 21:31:55.893: INFO: Pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b": Phase="Pending", Reason="", readiness=false. Elapsed: 46.905633ms Jan 26 21:31:57.903: INFO: Pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.057204616s Jan 26 21:31:59.937: INFO: Pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091155237s Jan 26 21:32:01.947: INFO: Pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101005673s Jan 26 21:32:03.965: INFO: Pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.118553204s STEP: Saw pod success Jan 26 21:32:03.965: INFO: Pod "pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b" satisfied condition "success or failure" Jan 26 21:32:03.970: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b container projected-secret-volume-test: STEP: delete the pod Jan 26 21:32:04.545: INFO: Waiting for pod pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b to disappear Jan 26 21:32:04.552: INFO: Pod pod-projected-secrets-2c801450-6238-4836-855e-8e078bb3ff7b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:32:04.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1775" for this suite. • [SLOW TEST:8.884 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1446,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:32:04.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 26 21:32:04.760: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:32:18.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6864" for this suite. 
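
The submit-and-remove flow above (set up watch, create, delete gracefully, observe the DELETED event) can be sketched with client-go as below. The pod name, namespace, and grace period are illustrative assumptions; the signatures assume a current client-go where calls take a context.Context.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name, ns := "pod-submit-remove", "default" // illustrative

	// Watch just this pod, as the test's "setting up watch" step does,
	// so creation and deletion are observable as discrete events.
	w, err := client.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: fields.OneTermEqualSelector("metadata.name", name).String(),
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Graceful delete: the kubelet receives the termination notice and
	// the API object disappears only after grace-period handling.
	grace := int64(30)
	if err := client.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	}); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == watch.Deleted {
			break
		}
	}
}
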
• [SLOW TEST:13.929 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1451,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:32:18.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 26 21:32:18.601: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 26 21:32:18.724: INFO: Waiting for terminating namespaces to be deleted... Jan 26 21:32:18.728: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 26 21:32:18.741: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.741: INFO: Container kube-proxy ready: true, restart count 0 Jan 26 21:32:18.741: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 26 21:32:18.741: INFO: Container weave ready: true, restart count 1 Jan 26 21:32:18.741: INFO: Container weave-npc ready: true, restart count 0 Jan 26 21:32:18.741: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 26 21:32:18.780: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.780: INFO: Container kube-apiserver ready: true, restart count 1 Jan 26 21:32:18.780: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.780: INFO: Container etcd ready: true, restart count 1 Jan 26 21:32:18.780: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.780: INFO: Container coredns ready: true, restart count 0 Jan 26 21:32:18.780: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.780: INFO: Container coredns ready: true, restart count 0 Jan 26 21:32:18.780: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.780: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 26 21:32:18.780: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.780: INFO: Container kube-proxy ready: true, restart count 0 Jan 26 21:32:18.781: INFO: weave-net-z6tjf from kube-system 
started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 26 21:32:18.781: INFO: Container weave ready: true, restart count 0 Jan 26 21:32:18.781: INFO: Container weave-npc ready: true, restart count 0 Jan 26 21:32:18.781: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 26 21:32:18.781: INFO: Container kube-scheduler ready: true, restart count 4 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Jan 26 21:32:18.922: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.922: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Jan 26 21:32:18.922: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Jan 26 21:32:18.922: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub Jan 26 21:32:18.930: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-11b87b8b-2974-417e-a54c-a902260159fb.15ed8d3d63bb1add], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2559/filler-pod-11b87b8b-2974-417e-a54c-a902260159fb to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-11b87b8b-2974-417e-a54c-a902260159fb.15ed8d3e63c165e4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-11b87b8b-2974-417e-a54c-a902260159fb.15ed8d3f0606bf3c], Reason = [Created], Message = [Created container filler-pod-11b87b8b-2974-417e-a54c-a902260159fb] STEP: Considering event: Type = [Normal], Name = [filler-pod-11b87b8b-2974-417e-a54c-a902260159fb.15ed8d3f2d66c5a5], Reason = [Started], Message = [Started container filler-pod-11b87b8b-2974-417e-a54c-a902260159fb] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6.15ed8d3d63b2d013], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2559/filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6 to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6.15ed8d3e66648b82], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6.15ed8d3f4b5d78a8], Reason = [Created], Message = [Created container filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6] STEP: Considering event: Type = [Normal], Name = [filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6.15ed8d3f719a6085], Reason = [Started], Message = [Started container filler-pod-c8bf3951-acc8-4594-9852-e59389a84ec6] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ed8d3fb9338fb5], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ed8d3fba4eb3ca], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:32:30.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2559" for this suite. 
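
The scheduler-predicates test above works purely on requests, not actual usage: it sums the CPU requests already on each node, creates "filler" pods sized to consume the remaining allocatable CPU, and then shows that one more requesting pod is rejected with "0/2 nodes are available: 2 Insufficient cpu." A sketch of such a filler pod in Go (the pod name is an assumption; the 2786m figure is taken from the run above):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The filler requests (but never uses) CPU, so the scheduler's
	// bookkeeping, not real load, is what fills the node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// allocatable minus the requests already on the
						// node: 2786m on jerma-node in the run above.
						corev1.ResourceCPU: resource.MustParse("2786m"),
					},
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
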
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:11.751 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":90,"skipped":1459,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:32:30.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7573.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7573.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 21:32:42.497: INFO: DNS probes using dns-test-63148359-69b4-4076-b79a-15ba943bf89b succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7573.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7573.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 21:32:56.720: INFO: File wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 26 21:32:56.727: INFO: File jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 26 21:32:56.727: INFO: Lookups using dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 failed for: [wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local] Jan 26 21:33:01.737: INFO: File wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 26 21:33:01.744: INFO: File jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 26 21:33:01.744: INFO: Lookups using dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 failed for: [wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local] Jan 26 21:33:06.737: INFO: File wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 26 21:33:06.741: INFO: File jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 26 21:33:06.741: INFO: Lookups using dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 failed for: [wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local] Jan 26 21:33:11.739: INFO: File wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local from pod dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 26 21:33:11.749: INFO: Lookups using dns-7573/dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 failed for: [wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local] Jan 26 21:33:16.743: INFO: DNS probes using dns-test-3a3ac25f-c1e9-4c3e-8066-9773f0f270e7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7573.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7573.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7573.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7573.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 21:33:30.932: INFO: DNS probes using dns-test-9564588c-dc43-405f-b859-090a00ffd81f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:33:30.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7573" for this suite. 
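
The ExternalName service driving the probes above can be sketched as follows. Cluster DNS answers the service's cluster-local name with a CNAME to spec.externalName, so updating externalName (foo.example.com to bar.example.com above) changes the CNAME, and converting the service to type ClusterIP makes the same name resolve to an A record instead. The namespace is omitted and the hostnames are the ones from the run; this is a sketch, not the test's own object.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			// No selector and no cluster IP: DNS serves a CNAME to
			// the external name instead of endpoints.
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	out, err := yaml.Marshal(svc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
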
• [SLOW TEST:60.818 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":91,"skipped":1465,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:33:31.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0126 21:34:13.635057 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 26 21:34:13.635: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:34:13.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4548" for this suite. 
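
The orphaning behavior tested above hangs on the delete options: with propagationPolicy Orphan, the garbage collector strips the ownerReference from each dependent pod instead of cascading the delete, which is why the test waits 30 seconds and confirms the pods are still running. A minimal client-go sketch (the RC name and namespace are illustrative assumptions; calls assume a current client-go that takes a context.Context):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan propagation: delete the RC, keep its pods. The GC removes
	// the ownerReferences so the pods are no longer "owned" by anything.
	policy := metav1.DeletePropagationOrphan
	if err := client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc", metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}
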
• [SLOW TEST:42.558 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":92,"skipped":1476,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:34:13.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-83fb9ee3-8693-491d-b44f-92e00bdb2d33 STEP: Creating a pod to test consume configMaps Jan 26 21:34:13.864: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e" in namespace "configmap-6755" to be "success or failure" Jan 26 21:34:13.957: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e": Phase="Pending", Reason="", readiness=false. Elapsed: 92.897386ms Jan 26 21:34:15.968: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103772867s Jan 26 21:34:17.977: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113314637s Jan 26 21:34:20.027: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16323047s Jan 26 21:34:22.539: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.675036681s Jan 26 21:34:24.560: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.696019972s STEP: Saw pod success Jan 26 21:34:24.560: INFO: Pod "pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e" satisfied condition "success or failure" Jan 26 21:34:24.569: INFO: Trying to get logs from node jerma-node pod pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e container configmap-volume-test: STEP: delete the pod Jan 26 21:34:27.087: INFO: Waiting for pod pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e to disappear Jan 26 21:34:27.751: INFO: Pod pod-configmaps-8c62954b-44e6-40b3-89b3-cfc24004c25e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:34:27.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6755" for this suite. 
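
The defaultMode knob the ConfigMap test above exercises sets the octal file mode applied to every key projected into the volume. A sketch of the pod shape, with illustrative names, image, and mode (the test's real values differ):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400) // file mode for every projected key
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-defaultmode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
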
• [SLOW TEST:14.190 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1489,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:34:27.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 26 21:34:38.924: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:34:38.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7726" for this suite. 
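
The termination-message test above depends on TerminationMessagePolicy FallbackToLogsOnError: when a container fails without writing to its termination-message file, the kubelet copies the tail of the container's log into the terminated state's Message, which is the "DONE" the test matches. A sketch with illustrative pod name, image, and command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-fallback"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.29",
				// Fail without touching /dev/termination-log: the log
				// tail ("DONE") becomes the termination message.
				Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
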
• [SLOW TEST:11.135 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1499,"failed":0} [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:34:38.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa Jan 26 21:34:39.180: INFO: Pod name my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa: Found 0 pods out of 1 Jan 26 21:34:44.187: INFO: Pod name my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa: Found 1 pods out of 1 Jan 26 21:34:44.187: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa" are running Jan 26 21:34:46.215: INFO: Pod "my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa-wqsk9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 21:34:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 21:34:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 21:34:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 21:34:39 +0000 UTC Reason: Message:}]) Jan 26 21:34:46.216: INFO: Trying to dial the pod Jan 26 21:34:51.245: INFO: Controller my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa: Got expected result from replica 1 [my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa-wqsk9]: "my-hostname-basic-66d12978-7730-4359-aed5-e8200b52eafa-wqsk9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:34:51.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2566" for this suite. • [SLOW TEST:12.280 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":95,"skipped":1499,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:34:51.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 26 21:34:51.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c" in namespace "downward-api-7987" to be "success or failure" Jan 26 21:34:51.474: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.365849ms Jan 26 21:34:53.479: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015392614s Jan 26 21:34:55.489: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025563254s Jan 26 21:34:57.500: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036543151s Jan 26 21:34:59.511: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047378149s Jan 26 21:35:01.520: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.05647195s STEP: Saw pod success Jan 26 21:35:01.520: INFO: Pod "downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c" satisfied condition "success or failure" Jan 26 21:35:01.563: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c container client-container: STEP: delete the pod Jan 26 21:35:01.793: INFO: Waiting for pod downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c to disappear Jan 26 21:35:01.804: INFO: Pod downwardapi-volume-9a6ff306-6c82-4f4d-a43a-12d454d1e59c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 26 21:35:01.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7987" for this suite. • [SLOW TEST:10.564 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 26 21:35:01.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 26 21:35:01.981: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/:
alternatives.log
apt/
... (200; 16.849845ms)
Jan 26 21:35:01.985: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.152513ms)
Jan 26 21:35:01.988: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.107008ms)
Jan 26 21:35:01.992: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.607076ms)
Jan 26 21:35:01.995: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.881668ms)
Jan 26 21:35:01.999: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.759976ms)
Jan 26 21:35:02.038: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 38.992402ms)
Jan 26 21:35:02.044: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.955515ms)
Jan 26 21:35:02.048: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.995643ms)
Jan 26 21:35:02.052: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.036343ms)
Jan 26 21:35:02.055: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.209194ms)
Jan 26 21:35:02.060: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.715999ms)
Jan 26 21:35:02.063: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.411155ms)
Jan 26 21:35:02.067: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.542652ms)
Jan 26 21:35:02.070: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.891349ms)
Jan 26 21:35:02.074: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.319318ms)
Jan 26 21:35:02.078: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.312552ms)
Jan 26 21:35:02.080: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.591387ms)
Jan 26 21:35:02.083: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.858508ms)
Jan 26 21:35:02.086: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.305239ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:02.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6587" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":97,"skipped":1526,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:02.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:35:02.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 26 21:35:05.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8361 create -f -'
Jan 26 21:35:08.023: INFO: stderr: ""
Jan 26 21:35:08.023: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 26 21:35:08.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8361 delete e2e-test-crd-publish-openapi-8000-crds test-cr'
Jan 26 21:35:08.174: INFO: stderr: ""
Jan 26 21:35:08.174: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 26 21:35:08.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8361 apply -f -'
Jan 26 21:35:08.402: INFO: stderr: ""
Jan 26 21:35:08.402: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 26 21:35:08.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8361 delete e2e-test-crd-publish-openapi-8000-crds test-cr'
Jan 26 21:35:08.605: INFO: stderr: ""
Jan 26 21:35:08.605: INFO: stdout: "e2e-test-crd-publish-openapi-8000-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 26 21:35:08.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8000-crds'
Jan 26 21:35:09.081: INFO: stderr: ""
Jan 26 21:35:09.081: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8000-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:12.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8361" for this suite.

• [SLOW TEST:10.843 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":98,"skipped":1571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:12.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 26 21:35:13.077: INFO: Waiting up to 5m0s for pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652" in namespace "emptydir-8902" to be "success or failure"
Jan 26 21:35:13.211: INFO: Pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652": Phase="Pending", Reason="", readiness=false. Elapsed: 134.134395ms
Jan 26 21:35:15.218: INFO: Pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140881807s
Jan 26 21:35:17.225: INFO: Pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148231131s
Jan 26 21:35:19.231: INFO: Pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154084747s
Jan 26 21:35:21.238: INFO: Pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160758493s
STEP: Saw pod success
Jan 26 21:35:21.238: INFO: Pod "pod-92f06cf0-be83-4a7d-afd5-6c869825d652" satisfied condition "success or failure"
Jan 26 21:35:21.242: INFO: Trying to get logs from node jerma-node pod pod-92f06cf0-be83-4a7d-afd5-6c869825d652 container test-container: 
STEP: delete the pod
Jan 26 21:35:21.290: INFO: Waiting for pod pod-92f06cf0-be83-4a7d-afd5-6c869825d652 to disappear
Jan 26 21:35:21.299: INFO: Pod pod-92f06cf0-be83-4a7d-afd5-6c869825d652 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:21.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8902" for this suite.

• [SLOW TEST:8.377 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1600,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:21.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jan 26 21:35:21.501: INFO: Waiting up to 5m0s for pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535" in namespace "containers-9782" to be "success or failure"
Jan 26 21:35:21.638: INFO: Pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535": Phase="Pending", Reason="", readiness=false. Elapsed: 136.777176ms
Jan 26 21:35:23.651: INFO: Pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15019653s
Jan 26 21:35:25.659: INFO: Pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157723145s
Jan 26 21:35:27.669: INFO: Pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168011135s
Jan 26 21:35:29.677: INFO: Pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176397036s
STEP: Saw pod success
Jan 26 21:35:29.677: INFO: Pod "client-containers-52141c39-6c05-4050-8662-e8c3110f8535" satisfied condition "success or failure"
Jan 26 21:35:29.682: INFO: Trying to get logs from node jerma-node pod client-containers-52141c39-6c05-4050-8662-e8c3110f8535 container test-container: 
STEP: delete the pod
Jan 26 21:35:29.741: INFO: Waiting for pod client-containers-52141c39-6c05-4050-8662-e8c3110f8535 to disappear
Jan 26 21:35:29.768: INFO: Pod client-containers-52141c39-6c05-4050-8662-e8c3110f8535 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:29.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9782" for this suite.

• [SLOW TEST:8.494 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1632,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:29.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:35:29.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:38.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7927" for this suite.

• [SLOW TEST:8.383 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1648,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:38.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:45.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9920" for this suite.

• [SLOW TEST:7.213 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":102,"skipped":1650,"failed":0}
SS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:45.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 26 21:35:54.068: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1480 pod-service-account-ae36ade2-5e15-4ab9-8ea8-8ecb8756aedb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 26 21:35:54.383: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1480 pod-service-account-ae36ade2-5e15-4ab9-8ea8-8ecb8756aedb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 26 21:35:54.765: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1480 pod-service-account-ae36ade2-5e15-4ab9-8ea8-8ecb8756aedb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:55.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1480" for this suite.

• [SLOW TEST:9.682 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":103,"skipped":1652,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:55.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 26 21:35:55.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2616'
Jan 26 21:35:55.457: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 21:35:55.457: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Jan 26 21:35:55.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2616'
Jan 26 21:35:55.642: INFO: stderr: ""
Jan 26 21:35:55.642: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:35:55.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2616" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":104,"skipped":1654,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:35:55.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 26 21:35:55.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4641'
Jan 26 21:35:55.921: INFO: stderr: ""
Jan 26 21:35:55.921: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Jan 26 21:35:55.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4641'
Jan 26 21:36:02.368: INFO: stderr: ""
Jan 26 21:36:02.369: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:36:02.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4641" for this suite.

• [SLOW TEST:6.788 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":105,"skipped":1660,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:36:02.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-26jj
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 21:36:02.636: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-26jj" in namespace "subpath-9409" to be "success or failure"
Jan 26 21:36:02.702: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Pending", Reason="", readiness=false. Elapsed: 65.494203ms
Jan 26 21:36:04.714: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078263765s
Jan 26 21:36:06.724: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08832476s
Jan 26 21:36:08.735: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098518114s
Jan 26 21:36:10.742: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106252359s
Jan 26 21:36:12.750: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 10.114400007s
Jan 26 21:36:14.758: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 12.121538048s
Jan 26 21:36:16.766: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 14.12996637s
Jan 26 21:36:18.777: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 16.140538396s
Jan 26 21:36:20.800: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 18.164102674s
Jan 26 21:36:22.805: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 20.16938987s
Jan 26 21:36:24.813: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 22.176793348s
Jan 26 21:36:26.822: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 24.185495176s
Jan 26 21:36:28.833: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 26.19704424s
Jan 26 21:36:30.843: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Running", Reason="", readiness=true. Elapsed: 28.206563498s
Jan 26 21:36:32.866: INFO: Pod "pod-subpath-test-secret-26jj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.230103526s
STEP: Saw pod success
Jan 26 21:36:32.866: INFO: Pod "pod-subpath-test-secret-26jj" satisfied condition "success or failure"
Jan 26 21:36:32.872: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-26jj container test-container-subpath-secret-26jj: 
STEP: delete the pod
Jan 26 21:36:33.134: INFO: Waiting for pod pod-subpath-test-secret-26jj to disappear
Jan 26 21:36:33.144: INFO: Pod pod-subpath-test-secret-26jj no longer exists
STEP: Deleting pod pod-subpath-test-secret-26jj
Jan 26 21:36:33.144: INFO: Deleting pod "pod-subpath-test-secret-26jj" in namespace "subpath-9409"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:36:33.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9409" for this suite.

• [SLOW TEST:30.730 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":106,"skipped":1702,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:36:33.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5075
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5075
STEP: creating replication controller externalsvc in namespace services-5075
I0126 21:36:33.433205       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5075, replica count: 2
I0126 21:36:36.485675       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 21:36:39.486844       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 21:36:42.488343       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 21:36:45.489294       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan 26 21:36:45.537: INFO: Creating new exec pod
Jan 26 21:36:53.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5075 execpodkvdht -- /bin/sh -x -c nslookup clusterip-service'
Jan 26 21:36:54.140: INFO: stderr: "I0126 21:36:53.857131    1992 log.go:172] (0xc0008ec000) (0xc0008a2000) Create stream\nI0126 21:36:53.857602    1992 log.go:172] (0xc0008ec000) (0xc0008a2000) Stream added, broadcasting: 1\nI0126 21:36:53.872363    1992 log.go:172] (0xc0008ec000) Reply frame received for 1\nI0126 21:36:53.872611    1992 log.go:172] (0xc0008ec000) (0xc0001f65a0) Create stream\nI0126 21:36:53.872642    1992 log.go:172] (0xc0008ec000) (0xc0001f65a0) Stream added, broadcasting: 3\nI0126 21:36:53.876752    1992 log.go:172] (0xc0008ec000) Reply frame received for 3\nI0126 21:36:53.876840    1992 log.go:172] (0xc0008ec000) (0xc0005c2f00) Create stream\nI0126 21:36:53.876857    1992 log.go:172] (0xc0008ec000) (0xc0005c2f00) Stream added, broadcasting: 5\nI0126 21:36:53.879472    1992 log.go:172] (0xc0008ec000) Reply frame received for 5\nI0126 21:36:54.003226    1992 log.go:172] (0xc0008ec000) Data frame received for 5\nI0126 21:36:54.003478    1992 log.go:172] (0xc0005c2f00) (5) Data frame handling\nI0126 21:36:54.003518    1992 log.go:172] (0xc0005c2f00) (5) Data frame sent\n+ nslookup clusterip-service\nI0126 21:36:54.032476    1992 log.go:172] (0xc0008ec000) Data frame received for 3\nI0126 21:36:54.032604    1992 log.go:172] (0xc0001f65a0) (3) Data frame handling\nI0126 21:36:54.032626    1992 log.go:172] (0xc0001f65a0) (3) Data frame sent\nI0126 21:36:54.035088    1992 log.go:172] (0xc0008ec000) Data frame received for 3\nI0126 21:36:54.035103    1992 log.go:172] (0xc0001f65a0) (3) Data frame handling\nI0126 21:36:54.035124    1992 log.go:172] (0xc0001f65a0) (3) Data frame sent\nI0126 21:36:54.124844    1992 log.go:172] (0xc0008ec000) (0xc0001f65a0) Stream removed, broadcasting: 3\nI0126 21:36:54.125258    1992 log.go:172] (0xc0008ec000) Data frame received for 1\nI0126 21:36:54.125295    1992 log.go:172] (0xc0008a2000) (1) Data frame handling\nI0126 21:36:54.125317    1992 log.go:172] (0xc0008a2000) (1) Data frame sent\nI0126 21:36:54.125337    1992 log.go:172] (0xc0008ec000) (0xc0008a2000) Stream removed, broadcasting: 1\nI0126 21:36:54.125981    1992 log.go:172] (0xc0008ec000) (0xc0005c2f00) Stream removed, broadcasting: 5\nI0126 21:36:54.126183    1992 log.go:172] (0xc0008ec000) Go away received\nI0126 21:36:54.126435    1992 log.go:172] (0xc0008ec000) (0xc0008a2000) Stream removed, broadcasting: 1\nI0126 21:36:54.126449    1992 log.go:172] (0xc0008ec000) (0xc0001f65a0) Stream removed, broadcasting: 3\nI0126 21:36:54.126461    1992 log.go:172] (0xc0008ec000) (0xc0005c2f00) Stream removed, broadcasting: 5\n"
Jan 26 21:36:54.141: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5075.svc.cluster.local\tcanonical name = externalsvc.services-5075.svc.cluster.local.\nName:\texternalsvc.services-5075.svc.cluster.local\nAddress: 10.96.165.67\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5075, will wait for the garbage collector to delete the pods
Jan 26 21:36:54.209: INFO: Deleting ReplicationController externalsvc took: 6.943528ms
Jan 26 21:36:54.510: INFO: Terminating ReplicationController externalsvc pods took: 300.441935ms
Jan 26 21:37:13.243: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:37:13.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5075" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:40.142 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":107,"skipped":1706,"failed":0}
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:37:13.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:37:13.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 26 21:37:13.720: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-26T21:37:13Z generation:1 name:name1 resourceVersion:4542191 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2a24d722-0b3c-4e89-bd8d-d3f5a5b23452] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 26 21:37:23.731: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-26T21:37:23Z generation:1 name:name2 resourceVersion:4542237 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:be7d9dea-2012-4003-8a0a-700d53b2b41a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 26 21:37:33.742: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-26T21:37:13Z generation:2 name:name1 resourceVersion:4542257 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2a24d722-0b3c-4e89-bd8d-d3f5a5b23452] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 26 21:37:43.754: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-26T21:37:23Z generation:2 name:name2 resourceVersion:4542281 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:be7d9dea-2012-4003-8a0a-700d53b2b41a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 26 21:37:53.770: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-26T21:37:13Z generation:2 name:name1 resourceVersion:4542307 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:2a24d722-0b3c-4e89-bd8d-d3f5a5b23452] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 26 21:38:03.800: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-26T21:37:23Z generation:2 name:name2 resourceVersion:4542331 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:be7d9dea-2012-4003-8a0a-700d53b2b41a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:38:14.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3224" for this suite.

• [SLOW TEST:61.029 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":108,"skipped":1706,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:38:14.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 26 21:38:14.447: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 21:38:14.471: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 21:38:14.475: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 26 21:38:14.503: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.504: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 21:38:14.504: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 26 21:38:14.504: INFO: 	Container weave ready: true, restart count 1
Jan 26 21:38:14.504: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 21:38:14.504: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 26 21:38:14.532: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.532: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 26 21:38:14.532: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.532: INFO: 	Container etcd ready: true, restart count 1
Jan 26 21:38:14.533: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.533: INFO: 	Container coredns ready: true, restart count 0
Jan 26 21:38:14.533: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.533: INFO: 	Container coredns ready: true, restart count 0
Jan 26 21:38:14.533: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 26 21:38:14.533: INFO: 	Container weave ready: true, restart count 0
Jan 26 21:38:14.533: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 21:38:14.533: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.533: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 26 21:38:14.533: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.533: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 21:38:14.533: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 26 21:38:14.533: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-90f66819-eb58-4b15-9408-c9b72637b45d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-90f66819-eb58-4b15-9408-c9b72637b45d off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-90f66819-eb58-4b15-9408-c9b72637b45d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:38:30.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5866" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.530 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":109,"skipped":1711,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:38:30.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jan 26 21:38:31.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7710'
Jan 26 21:38:31.526: INFO: stderr: ""
Jan 26 21:38:31.526: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 21:38:31.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7710'
Jan 26 21:38:31.698: INFO: stderr: ""
Jan 26 21:38:31.698: INFO: stdout: "update-demo-nautilus-k8bmc update-demo-nautilus-zw9lf "
Jan 26 21:38:31.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8bmc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:38:31.832: INFO: stderr: ""
Jan 26 21:38:31.832: INFO: stdout: ""
Jan 26 21:38:31.832: INFO: update-demo-nautilus-k8bmc is created but not running
Jan 26 21:38:36.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7710'
Jan 26 21:38:36.980: INFO: stderr: ""
Jan 26 21:38:36.980: INFO: stdout: "update-demo-nautilus-k8bmc update-demo-nautilus-zw9lf "
Jan 26 21:38:36.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8bmc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:38:37.879: INFO: stderr: ""
Jan 26 21:38:37.879: INFO: stdout: ""
Jan 26 21:38:37.879: INFO: update-demo-nautilus-k8bmc is created but not running
Jan 26 21:38:42.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7710'
Jan 26 21:38:43.015: INFO: stderr: ""
Jan 26 21:38:43.015: INFO: stdout: "update-demo-nautilus-k8bmc update-demo-nautilus-zw9lf "
Jan 26 21:38:43.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8bmc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:38:43.142: INFO: stderr: ""
Jan 26 21:38:43.143: INFO: stdout: "true"
Jan 26 21:38:43.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k8bmc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:38:43.253: INFO: stderr: ""
Jan 26 21:38:43.253: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 21:38:43.253: INFO: validating pod update-demo-nautilus-k8bmc
Jan 26 21:38:43.262: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 21:38:43.262: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 26 21:38:43.262: INFO: update-demo-nautilus-k8bmc is verified up and running
Jan 26 21:38:43.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zw9lf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:38:43.398: INFO: stderr: ""
Jan 26 21:38:43.398: INFO: stdout: "true"
Jan 26 21:38:43.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zw9lf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:38:43.477: INFO: stderr: ""
Jan 26 21:38:43.477: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 21:38:43.477: INFO: validating pod update-demo-nautilus-zw9lf
Jan 26 21:38:43.484: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 21:38:43.484: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 26 21:38:43.484: INFO: update-demo-nautilus-zw9lf is verified up and running
STEP: rolling-update to new replication controller
Jan 26 21:38:43.488: INFO: scanned /root for discovery docs: 
Jan 26 21:38:43.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7710'
Jan 26 21:39:13.858: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 26 21:39:13.859: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 21:39:13.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7710'
Jan 26 21:39:14.119: INFO: stderr: ""
Jan 26 21:39:14.119: INFO: stdout: "update-demo-kitten-6vnhz update-demo-kitten-sjcsd "
Jan 26 21:39:14.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6vnhz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:39:14.206: INFO: stderr: ""
Jan 26 21:39:14.206: INFO: stdout: "true"
Jan 26 21:39:14.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6vnhz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:39:14.286: INFO: stderr: ""
Jan 26 21:39:14.286: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 26 21:39:14.286: INFO: validating pod update-demo-kitten-6vnhz
Jan 26 21:39:14.292: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 26 21:39:14.292: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 26 21:39:14.292: INFO: update-demo-kitten-6vnhz is verified up and running
Jan 26 21:39:14.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sjcsd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:39:14.384: INFO: stderr: ""
Jan 26 21:39:14.384: INFO: stdout: "true"
Jan 26 21:39:14.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sjcsd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7710'
Jan 26 21:39:14.486: INFO: stderr: ""
Jan 26 21:39:14.486: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 26 21:39:14.486: INFO: validating pod update-demo-kitten-sjcsd
Jan 26 21:39:14.494: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 26 21:39:14.494: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 26 21:39:14.494: INFO: update-demo-kitten-sjcsd is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:39:14.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7710" for this suite.

• [SLOW TEST:43.631 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":110,"skipped":1714,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:39:14.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 26 21:39:14.615: INFO: >>> kubeConfig: /root/.kube/config
Jan 26 21:39:18.056: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:39:33.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6129" for this suite.

• [SLOW TEST:19.044 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":111,"skipped":1739,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:39:33.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 26 21:39:33.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1818'
Jan 26 21:39:33.888: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 21:39:33.888: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jan 26 21:39:33.910: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 26 21:39:33.998: INFO: scanned /root for discovery docs: 
Jan 26 21:39:33.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1818'
Jan 26 21:39:55.294: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 26 21:39:55.294: INFO: stdout: "Created e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c\nScaling up e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 26 21:39:55.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1818'
Jan 26 21:39:55.523: INFO: stderr: ""
Jan 26 21:39:55.523: INFO: stdout: "e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c-d47wf "
Jan 26 21:39:55.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c-d47wf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1818'
Jan 26 21:39:55.612: INFO: stderr: ""
Jan 26 21:39:55.612: INFO: stdout: "true"
Jan 26 21:39:55.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c-d47wf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1818'
Jan 26 21:39:55.756: INFO: stderr: ""
Jan 26 21:39:55.756: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 26 21:39:55.756: INFO: e2e-test-httpd-rc-469e4fe054ff47324f805f92000ee42c-d47wf is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan 26 21:39:55.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1818'
Jan 26 21:39:55.899: INFO: stderr: ""
Jan 26 21:39:55.899: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:39:55.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1818" for this suite.

• [SLOW TEST:22.357 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":112,"skipped":1742,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:39:55.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 26 21:39:56.030: INFO: Waiting up to 5m0s for pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9" in namespace "downward-api-6151" to be "success or failure"
Jan 26 21:39:56.085: INFO: Pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9": Phase="Pending", Reason="", readiness=false. Elapsed: 55.512301ms
Jan 26 21:39:58.099: INFO: Pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069317283s
Jan 26 21:40:00.111: INFO: Pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080619126s
Jan 26 21:40:02.120: INFO: Pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090200641s
Jan 26 21:40:04.128: INFO: Pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09789768s
STEP: Saw pod success
Jan 26 21:40:04.128: INFO: Pod "downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9" satisfied condition "success or failure"
Jan 26 21:40:04.131: INFO: Trying to get logs from node jerma-node pod downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9 container dapi-container: 
STEP: delete the pod
Jan 26 21:40:04.201: INFO: Waiting for pod downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9 to disappear
Jan 26 21:40:04.249: INFO: Pod downward-api-52c0ae60-7e1b-4f4c-9f8a-3d5ecd7b27b9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:40:04.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6151" for this suite.

• [SLOW TEST:8.418 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1746,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:40:04.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jan 26 21:40:04.414: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 26 21:40:04.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Jan 26 21:40:04.972: INFO: stderr: ""
Jan 26 21:40:04.972: INFO: stdout: "service/agnhost-slave created\n"
Jan 26 21:40:04.987: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 26 21:40:04.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Jan 26 21:40:05.510: INFO: stderr: ""
Jan 26 21:40:05.510: INFO: stdout: "service/agnhost-master created\n"
Jan 26 21:40:05.511: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 26 21:40:05.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Jan 26 21:40:05.991: INFO: stderr: ""
Jan 26 21:40:05.991: INFO: stdout: "service/frontend created\n"
Jan 26 21:40:05.992: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 26 21:40:05.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Jan 26 21:40:06.381: INFO: stderr: ""
Jan 26 21:40:06.382: INFO: stdout: "deployment.apps/frontend created\n"
Jan 26 21:40:06.383: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 26 21:40:06.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Jan 26 21:40:06.828: INFO: stderr: ""
Jan 26 21:40:06.828: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 26 21:40:06.829: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 26 21:40:06.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Jan 26 21:40:07.375: INFO: stderr: ""
Jan 26 21:40:07.375: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 26 21:40:07.376: INFO: Waiting for all frontend pods to be Running.
Jan 26 21:40:27.430: INFO: Waiting for frontend to serve content.
Jan 26 21:40:27.455: INFO: Trying to add a new entry to the guestbook.
Jan 26 21:40:27.478: INFO: Verifying that added entry can be retrieved.
Jan 26 21:40:27.491: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan 26 21:40:32.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6439'
Jan 26 21:40:32.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:40:32.720: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 21:40:32.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6439'
Jan 26 21:40:32.881: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:40:32.881: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 21:40:32.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6439'
Jan 26 21:40:33.209: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:40:33.209: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 21:40:33.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6439'
Jan 26 21:40:33.319: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:40:33.319: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 21:40:33.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6439'
Jan 26 21:40:33.458: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:40:33.458: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 21:40:33.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6439'
Jan 26 21:40:33.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:40:33.616: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:40:33.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6439" for this suite.

• [SLOW TEST:29.338 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":114,"skipped":1760,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:40:33.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 21:40:36.159: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 21:40:39.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:40:41.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:40:43.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:40:45.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:40:48.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 21:40:51.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:40:51.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5223-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:40:52.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6867" for this suite.
STEP: Destroying namespace "webhook-6867-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.927 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":115,"skipped":1770,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:40:52.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 21:40:52.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae" in namespace "downward-api-3526" to be "success or failure"
Jan 26 21:40:52.776: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.707331ms
Jan 26 21:40:54.797: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02803561s
Jan 26 21:40:56.808: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038278197s
Jan 26 21:40:58.814: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044240815s
Jan 26 21:41:00.823: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053707894s
Jan 26 21:41:02.840: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070604667s
STEP: Saw pod success
Jan 26 21:41:02.840: INFO: Pod "downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae" satisfied condition "success or failure"
Jan 26 21:41:02.847: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae container client-container: 
STEP: delete the pod
Jan 26 21:41:02.938: INFO: Waiting for pod downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae to disappear
Jan 26 21:41:02.943: INFO: Pod downwardapi-volume-0bcf1e8d-923f-4de7-a24d-4c7725ff32ae no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:41:02.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3526" for this suite.

• [SLOW TEST:10.432 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1843,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:41:03.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:41:09.459: INFO: Waiting up to 5m0s for pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679" in namespace "pods-6584" to be "success or failure"
Jan 26 21:41:09.464: INFO: Pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.823057ms
Jan 26 21:41:11.477: INFO: Pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016981871s
Jan 26 21:41:13.485: INFO: Pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025240551s
Jan 26 21:41:15.492: INFO: Pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032104045s
Jan 26 21:41:17.501: INFO: Pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041120038s
STEP: Saw pod success
Jan 26 21:41:17.501: INFO: Pod "client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679" satisfied condition "success or failure"
Jan 26 21:41:17.506: INFO: Trying to get logs from node jerma-node pod client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679 container env3cont: 
STEP: delete the pod
Jan 26 21:41:17.539: INFO: Waiting for pod client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679 to disappear
Jan 26 21:41:17.547: INFO: Pod client-envvars-4fc73a25-eb07-4bef-988f-351c4d26a679 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:41:17.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6584" for this suite.

• [SLOW TEST:14.534 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1850,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:41:17.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 26 21:41:17.836: INFO: Waiting up to 5m0s for pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5" in namespace "emptydir-8549" to be "success or failure"
Jan 26 21:41:17.840: INFO: Pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.706161ms
Jan 26 21:41:19.849: INFO: Pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013343319s
Jan 26 21:41:21.862: INFO: Pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025901632s
Jan 26 21:41:23.878: INFO: Pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042396633s
Jan 26 21:41:25.892: INFO: Pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05587083s
STEP: Saw pod success
Jan 26 21:41:25.892: INFO: Pod "pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5" satisfied condition "success or failure"
Jan 26 21:41:25.895: INFO: Trying to get logs from node jerma-node pod pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5 container test-container: 
STEP: delete the pod
Jan 26 21:41:25.963: INFO: Waiting for pod pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5 to disappear
Jan 26 21:41:25.981: INFO: Pod pod-7d0baca6-ab66-4822-83e1-9b1e51235fb5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:41:25.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8549" for this suite.

• [SLOW TEST:8.423 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1867,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:41:25.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Jan 26 21:41:26.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7393'
Jan 26 21:41:26.725: INFO: stderr: ""
Jan 26 21:41:26.725: INFO: stdout: "pod/pause created\n"
Jan 26 21:41:26.726: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 26 21:41:26.726: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7393" to be "running and ready"
Jan 26 21:41:26.738: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.438587ms
Jan 26 21:41:28.752: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026342141s
Jan 26 21:41:30.760: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034398756s
Jan 26 21:41:32.769: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043039579s
Jan 26 21:41:34.783: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.057240161s
Jan 26 21:41:34.783: INFO: Pod "pause" satisfied condition "running and ready"
Jan 26 21:41:34.783: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 26 21:41:34.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7393'
Jan 26 21:41:34.973: INFO: stderr: ""
Jan 26 21:41:34.973: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 26 21:41:34.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7393'
Jan 26 21:41:35.080: INFO: stderr: ""
Jan 26 21:41:35.080: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 26 21:41:35.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7393'
Jan 26 21:41:35.230: INFO: stderr: ""
Jan 26 21:41:35.230: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 26 21:41:35.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7393'
Jan 26 21:41:35.342: INFO: stderr: ""
Jan 26 21:41:35.342: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Jan 26 21:41:35.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7393'
Jan 26 21:41:35.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 21:41:35.469: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 26 21:41:35.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7393'
Jan 26 21:41:35.657: INFO: stderr: "No resources found in kubectl-7393 namespace.\n"
Jan 26 21:41:35.657: INFO: stdout: ""
Jan 26 21:41:35.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7393 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 21:41:35.779: INFO: stderr: ""
Jan 26 21:41:35.779: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:41:35.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7393" for this suite.

• [SLOW TEST:9.803 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":119,"skipped":1880,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:41:35.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9743
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9743
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9743
Jan 26 21:41:36.070: INFO: Found 0 stateful pods, waiting for 1
Jan 26 21:41:46.077: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 26 21:41:46.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:41:46.605: INFO: stderr: "I0126 21:41:46.337868    2853 log.go:172] (0xc0009426e0) (0xc000bac000) Create stream\nI0126 21:41:46.338067    2853 log.go:172] (0xc0009426e0) (0xc000bac000) Stream added, broadcasting: 1\nI0126 21:41:46.345508    2853 log.go:172] (0xc0009426e0) Reply frame received for 1\nI0126 21:41:46.345598    2853 log.go:172] (0xc0009426e0) (0xc0006abae0) Create stream\nI0126 21:41:46.345616    2853 log.go:172] (0xc0009426e0) (0xc0006abae0) Stream added, broadcasting: 3\nI0126 21:41:46.365127    2853 log.go:172] (0xc0009426e0) Reply frame received for 3\nI0126 21:41:46.365476    2853 log.go:172] (0xc0009426e0) (0xc0004ee000) Create stream\nI0126 21:41:46.365534    2853 log.go:172] (0xc0009426e0) (0xc0004ee000) Stream added, broadcasting: 5\nI0126 21:41:46.367804    2853 log.go:172] (0xc0009426e0) Reply frame received for 5\nI0126 21:41:46.450145    2853 log.go:172] (0xc0009426e0) Data frame received for 5\nI0126 21:41:46.450238    2853 log.go:172] (0xc0004ee000) (5) Data frame handling\nI0126 21:41:46.450273    2853 log.go:172] (0xc0004ee000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:41:46.473507    2853 log.go:172] (0xc0009426e0) Data frame received for 3\nI0126 21:41:46.473611    2853 log.go:172] (0xc0006abae0) (3) Data frame handling\nI0126 21:41:46.473638    2853 log.go:172] (0xc0006abae0) (3) Data frame sent\nI0126 21:41:46.578630    2853 log.go:172] (0xc0009426e0) Data frame received for 1\nI0126 21:41:46.579399    2853 log.go:172] (0xc0009426e0) (0xc0004ee000) Stream removed, broadcasting: 5\nI0126 21:41:46.579790    2853 log.go:172] (0xc000bac000) (1) Data frame handling\nI0126 21:41:46.580113    2853 log.go:172] (0xc000bac000) (1) Data frame sent\nI0126 21:41:46.580262    2853 log.go:172] (0xc0009426e0) (0xc0006abae0) Stream removed, broadcasting: 3\nI0126 21:41:46.580399    2853 log.go:172] (0xc0009426e0) (0xc000bac000) Stream removed, broadcasting: 1\nI0126 21:41:46.580428    2853 log.go:172] (0xc0009426e0) Go away received\nI0126 21:41:46.582753    2853 log.go:172] (0xc0009426e0) (0xc000bac000) Stream removed, broadcasting: 1\nI0126 21:41:46.582814    2853 log.go:172] (0xc0009426e0) (0xc0006abae0) Stream removed, broadcasting: 3\nI0126 21:41:46.582831    2853 log.go:172] (0xc0009426e0) (0xc0004ee000) Stream removed, broadcasting: 5\n"
Jan 26 21:41:46.605: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:41:46.605: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:41:46.615: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 26 21:41:56.628: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:41:56.628: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 21:41:56.660: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999407s
Jan 26 21:41:57.671: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98634392s
Jan 26 21:41:58.702: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975197788s
Jan 26 21:41:59.716: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.943849806s
Jan 26 21:42:00.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.929927608s
Jan 26 21:42:01.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.84403958s
Jan 26 21:42:02.819: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.83417711s
Jan 26 21:42:03.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.826277792s
Jan 26 21:42:04.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.818626474s
Jan 26 21:42:05.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 811.47713ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9743
Jan 26 21:42:06.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:42:07.910: INFO: stderr: "I0126 21:42:07.095626    2872 log.go:172] (0xc00099d1e0) (0xc000bc0280) Create stream\nI0126 21:42:07.095791    2872 log.go:172] (0xc00099d1e0) (0xc000bc0280) Stream added, broadcasting: 1\nI0126 21:42:07.099393    2872 log.go:172] (0xc00099d1e0) Reply frame received for 1\nI0126 21:42:07.099491    2872 log.go:172] (0xc00099d1e0) (0xc0009c0000) Create stream\nI0126 21:42:07.099509    2872 log.go:172] (0xc00099d1e0) (0xc0009c0000) Stream added, broadcasting: 3\nI0126 21:42:07.100530    2872 log.go:172] (0xc00099d1e0) Reply frame received for 3\nI0126 21:42:07.100550    2872 log.go:172] (0xc00099d1e0) (0xc00098c000) Create stream\nI0126 21:42:07.100557    2872 log.go:172] (0xc00099d1e0) (0xc00098c000) Stream added, broadcasting: 5\nI0126 21:42:07.102224    2872 log.go:172] (0xc00099d1e0) Reply frame received for 5\nI0126 21:42:07.761335    2872 log.go:172] (0xc00099d1e0) Data frame received for 5\nI0126 21:42:07.761579    2872 log.go:172] (0xc00098c000) (5) Data frame handling\nI0126 21:42:07.761612    2872 log.go:172] (0xc00098c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 21:42:07.763075    2872 log.go:172] (0xc00099d1e0) Data frame received for 3\nI0126 21:42:07.763111    2872 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0126 21:42:07.763163    2872 log.go:172] (0xc0009c0000) (3) Data frame sent\nI0126 21:42:07.884803    2872 log.go:172] (0xc00099d1e0) Data frame received for 1\nI0126 21:42:07.885322    2872 log.go:172] (0xc00099d1e0) (0xc0009c0000) Stream removed, broadcasting: 3\nI0126 21:42:07.885386    2872 log.go:172] (0xc000bc0280) (1) Data frame handling\nI0126 21:42:07.885467    2872 log.go:172] (0xc000bc0280) (1) Data frame sent\nI0126 21:42:07.885511    2872 log.go:172] (0xc00099d1e0) (0xc00098c000) Stream removed, broadcasting: 5\nI0126 21:42:07.885679    2872 log.go:172] (0xc00099d1e0) (0xc000bc0280) Stream removed, broadcasting: 1\nI0126 21:42:07.885720    2872 log.go:172] (0xc00099d1e0) Go away received\nI0126 21:42:07.893807    2872 log.go:172] (0xc00099d1e0) (0xc000bc0280) Stream removed, broadcasting: 1\nI0126 21:42:07.894285    2872 log.go:172] (0xc00099d1e0) (0xc0009c0000) Stream removed, broadcasting: 3\nI0126 21:42:07.894376    2872 log.go:172] (0xc00099d1e0) (0xc00098c000) Stream removed, broadcasting: 5\n"
Jan 26 21:42:07.911: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:42:07.911: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:42:07.971: INFO: Found 1 stateful pods, waiting for 3
Jan 26 21:42:17.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 21:42:17.983: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 21:42:17.983: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 21:42:27.985: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 21:42:27.985: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 21:42:27.985: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 26 21:42:27.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:42:28.430: INFO: stderr: "I0126 21:42:28.226786    2893 log.go:172] (0xc000a78420) (0xc0009ac000) Create stream\nI0126 21:42:28.227699    2893 log.go:172] (0xc000a78420) (0xc0009ac000) Stream added, broadcasting: 1\nI0126 21:42:28.239536    2893 log.go:172] (0xc000a78420) Reply frame received for 1\nI0126 21:42:28.239648    2893 log.go:172] (0xc000a78420) (0xc00060c5a0) Create stream\nI0126 21:42:28.239657    2893 log.go:172] (0xc000a78420) (0xc00060c5a0) Stream added, broadcasting: 3\nI0126 21:42:28.243133    2893 log.go:172] (0xc000a78420) Reply frame received for 3\nI0126 21:42:28.243172    2893 log.go:172] (0xc000a78420) (0xc000419360) Create stream\nI0126 21:42:28.243181    2893 log.go:172] (0xc000a78420) (0xc000419360) Stream added, broadcasting: 5\nI0126 21:42:28.244578    2893 log.go:172] (0xc000a78420) Reply frame received for 5\nI0126 21:42:28.327935    2893 log.go:172] (0xc000a78420) Data frame received for 5\nI0126 21:42:28.328302    2893 log.go:172] (0xc000419360) (5) Data frame handling\nI0126 21:42:28.328390    2893 log.go:172] (0xc000419360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:42:28.328660    2893 log.go:172] (0xc000a78420) Data frame received for 3\nI0126 21:42:28.328705    2893 log.go:172] (0xc00060c5a0) (3) Data frame handling\nI0126 21:42:28.328755    2893 log.go:172] (0xc00060c5a0) (3) Data frame sent\nI0126 21:42:28.414834    2893 log.go:172] (0xc000a78420) Data frame received for 1\nI0126 21:42:28.414987    2893 log.go:172] (0xc000a78420) (0xc000419360) Stream removed, broadcasting: 5\nI0126 21:42:28.415091    2893 log.go:172] (0xc0009ac000) (1) Data frame handling\nI0126 21:42:28.415127    2893 log.go:172] (0xc0009ac000) (1) Data frame sent\nI0126 21:42:28.415145    2893 log.go:172] (0xc000a78420) (0xc00060c5a0) Stream removed, broadcasting: 3\nI0126 21:42:28.415244    2893 log.go:172] (0xc000a78420) (0xc0009ac000) Stream removed, broadcasting: 1\nI0126 21:42:28.415281    2893 log.go:172] (0xc000a78420) Go away received\nI0126 21:42:28.417297    2893 log.go:172] (0xc000a78420) (0xc0009ac000) Stream removed, broadcasting: 1\nI0126 21:42:28.417428    2893 log.go:172] (0xc000a78420) (0xc00060c5a0) Stream removed, broadcasting: 3\nI0126 21:42:28.417446    2893 log.go:172] (0xc000a78420) (0xc000419360) Stream removed, broadcasting: 5\n"
Jan 26 21:42:28.430: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:42:28.430: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:42:28.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:42:28.992: INFO: stderr: "I0126 21:42:28.637241    2914 log.go:172] (0xc0008c0000) (0xc0008b2000) Create stream\nI0126 21:42:28.637687    2914 log.go:172] (0xc0008c0000) (0xc0008b2000) Stream added, broadcasting: 1\nI0126 21:42:28.648829    2914 log.go:172] (0xc0008c0000) Reply frame received for 1\nI0126 21:42:28.648992    2914 log.go:172] (0xc0008c0000) (0xc000ada140) Create stream\nI0126 21:42:28.649012    2914 log.go:172] (0xc0008c0000) (0xc000ada140) Stream added, broadcasting: 3\nI0126 21:42:28.650025    2914 log.go:172] (0xc0008c0000) Reply frame received for 3\nI0126 21:42:28.650083    2914 log.go:172] (0xc0008c0000) (0xc0008b41e0) Create stream\nI0126 21:42:28.650092    2914 log.go:172] (0xc0008c0000) (0xc0008b41e0) Stream added, broadcasting: 5\nI0126 21:42:28.656939    2914 log.go:172] (0xc0008c0000) Reply frame received for 5\nI0126 21:42:28.786189    2914 log.go:172] (0xc0008c0000) Data frame received for 5\nI0126 21:42:28.786379    2914 log.go:172] (0xc0008b41e0) (5) Data frame handling\nI0126 21:42:28.786399    2914 log.go:172] (0xc0008b41e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:42:28.900599    2914 log.go:172] (0xc0008c0000) Data frame received for 3\nI0126 21:42:28.900709    2914 log.go:172] (0xc000ada140) (3) Data frame handling\nI0126 21:42:28.900739    2914 log.go:172] (0xc000ada140) (3) Data frame sent\nI0126 21:42:28.985559    2914 log.go:172] (0xc0008c0000) (0xc0008b41e0) Stream removed, broadcasting: 5\nI0126 21:42:28.985704    2914 log.go:172] (0xc0008c0000) Data frame received for 1\nI0126 21:42:28.985741    2914 log.go:172] (0xc0008c0000) (0xc000ada140) Stream removed, broadcasting: 3\nI0126 21:42:28.985803    2914 log.go:172] (0xc0008b2000) (1) Data frame handling\nI0126 21:42:28.985828    2914 log.go:172] (0xc0008b2000) (1) Data frame sent\nI0126 21:42:28.985847    2914 log.go:172] (0xc0008c0000) (0xc0008b2000) Stream removed, broadcasting: 1\nI0126 21:42:28.985862    2914 log.go:172] (0xc0008c0000) Go away received\nI0126 21:42:28.986438    2914 log.go:172] (0xc0008c0000) (0xc0008b2000) Stream removed, broadcasting: 1\nI0126 21:42:28.986470    2914 log.go:172] (0xc0008c0000) (0xc000ada140) Stream removed, broadcasting: 3\nI0126 21:42:28.986478    2914 log.go:172] (0xc0008c0000) (0xc0008b41e0) Stream removed, broadcasting: 5\n"
Jan 26 21:42:28.992: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:42:28.992: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:42:28.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:42:29.385: INFO: stderr: "I0126 21:42:29.174120    2934 log.go:172] (0xc000a90dc0) (0xc000aac320) Create stream\nI0126 21:42:29.174366    2934 log.go:172] (0xc000a90dc0) (0xc000aac320) Stream added, broadcasting: 1\nI0126 21:42:29.177345    2934 log.go:172] (0xc000a90dc0) Reply frame received for 1\nI0126 21:42:29.177466    2934 log.go:172] (0xc000a90dc0) (0xc0009a8140) Create stream\nI0126 21:42:29.177486    2934 log.go:172] (0xc000a90dc0) (0xc0009a8140) Stream added, broadcasting: 3\nI0126 21:42:29.178824    2934 log.go:172] (0xc000a90dc0) Reply frame received for 3\nI0126 21:42:29.178877    2934 log.go:172] (0xc000a90dc0) (0xc000a74000) Create stream\nI0126 21:42:29.178908    2934 log.go:172] (0xc000a90dc0) (0xc000a74000) Stream added, broadcasting: 5\nI0126 21:42:29.180228    2934 log.go:172] (0xc000a90dc0) Reply frame received for 5\nI0126 21:42:29.260718    2934 log.go:172] (0xc000a90dc0) Data frame received for 5\nI0126 21:42:29.260870    2934 log.go:172] (0xc000a74000) (5) Data frame handling\nI0126 21:42:29.260903    2934 log.go:172] (0xc000a74000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:42:29.320779    2934 log.go:172] (0xc000a90dc0) Data frame received for 3\nI0126 21:42:29.320842    2934 log.go:172] (0xc0009a8140) (3) Data frame handling\nI0126 21:42:29.320863    2934 log.go:172] (0xc0009a8140) (3) Data frame sent\nI0126 21:42:29.378716    2934 log.go:172] (0xc000a90dc0) Data frame received for 1\nI0126 21:42:29.378796    2934 log.go:172] (0xc000aac320) (1) Data frame handling\nI0126 21:42:29.378822    2934 log.go:172] (0xc000aac320) (1) Data frame sent\nI0126 21:42:29.378984    2934 log.go:172] (0xc000a90dc0) (0xc000a74000) Stream removed, broadcasting: 5\nI0126 21:42:29.379080    2934 log.go:172] (0xc000a90dc0) (0xc000aac320) Stream removed, broadcasting: 1\nI0126 21:42:29.379756    2934 log.go:172] (0xc000a90dc0) (0xc0009a8140) Stream removed, broadcasting: 3\nI0126 21:42:29.379831    2934 log.go:172] (0xc000a90dc0) (0xc000aac320) Stream removed, broadcasting: 1\nI0126 21:42:29.379910    2934 log.go:172] (0xc000a90dc0) Go away received\nI0126 21:42:29.379942    2934 log.go:172] (0xc000a90dc0) (0xc0009a8140) Stream removed, broadcasting: 3\nI0126 21:42:29.379956    2934 log.go:172] (0xc000a90dc0) (0xc000a74000) Stream removed, broadcasting: 5\n"
Jan 26 21:42:29.385: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:42:29.385: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:42:29.385: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 21:42:29.390: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 26 21:42:39.407: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:42:39.407: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:42:39.407: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:42:39.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999432s
Jan 26 21:42:40.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993143065s
Jan 26 21:42:41.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982845946s
Jan 26 21:42:42.512: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964218386s
Jan 26 21:42:43.519: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938872482s
Jan 26 21:42:44.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.931360813s
Jan 26 21:42:46.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.92172733s
Jan 26 21:42:47.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.347141112s
Jan 26 21:42:48.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.339129604s
Jan 26 21:42:49.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 329.134051ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-9743
Jan 26 21:42:50.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:42:50.563: INFO: stderr: "I0126 21:42:50.345036    2954 log.go:172] (0xc000a3e630) (0xc0005f1e00) Create stream\nI0126 21:42:50.345309    2954 log.go:172] (0xc000a3e630) (0xc0005f1e00) Stream added, broadcasting: 1\nI0126 21:42:50.349076    2954 log.go:172] (0xc000a3e630) Reply frame received for 1\nI0126 21:42:50.349113    2954 log.go:172] (0xc000a3e630) (0xc000a1a000) Create stream\nI0126 21:42:50.349124    2954 log.go:172] (0xc000a3e630) (0xc000a1a000) Stream added, broadcasting: 3\nI0126 21:42:50.350511    2954 log.go:172] (0xc000a3e630) Reply frame received for 3\nI0126 21:42:50.350539    2954 log.go:172] (0xc000a3e630) (0xc0005f1ea0) Create stream\nI0126 21:42:50.350565    2954 log.go:172] (0xc000a3e630) (0xc0005f1ea0) Stream added, broadcasting: 5\nI0126 21:42:50.351946    2954 log.go:172] (0xc000a3e630) Reply frame received for 5\nI0126 21:42:50.442670    2954 log.go:172] (0xc000a3e630) Data frame received for 5\nI0126 21:42:50.442837    2954 log.go:172] (0xc0005f1ea0) (5) Data frame handling\nI0126 21:42:50.442862    2954 log.go:172] (0xc0005f1ea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 21:42:50.442916    2954 log.go:172] (0xc000a3e630) Data frame received for 3\nI0126 21:42:50.442930    2954 log.go:172] (0xc000a1a000) (3) Data frame handling\nI0126 21:42:50.442952    2954 log.go:172] (0xc000a1a000) (3) Data frame sent\nI0126 21:42:50.542086    2954 log.go:172] (0xc000a3e630) (0xc0005f1ea0) Stream removed, broadcasting: 5\nI0126 21:42:50.542661    2954 log.go:172] (0xc000a3e630) (0xc000a1a000) Stream removed, broadcasting: 3\nI0126 21:42:50.542813    2954 log.go:172] (0xc000a3e630) Data frame received for 1\nI0126 21:42:50.542935    2954 log.go:172] (0xc0005f1e00) (1) Data frame handling\nI0126 21:42:50.542983    2954 log.go:172] (0xc0005f1e00) (1) Data frame sent\nI0126 21:42:50.543365    2954 log.go:172] (0xc000a3e630) (0xc0005f1e00) Stream removed, broadcasting: 1\nI0126 21:42:50.543416    2954 log.go:172] (0xc000a3e630) Go away received\nI0126 21:42:50.545265    2954 log.go:172] (0xc000a3e630) (0xc0005f1e00) Stream removed, broadcasting: 1\nI0126 21:42:50.545291    2954 log.go:172] (0xc000a3e630) (0xc000a1a000) Stream removed, broadcasting: 3\nI0126 21:42:50.545306    2954 log.go:172] (0xc000a3e630) (0xc0005f1ea0) Stream removed, broadcasting: 5\n"
Jan 26 21:42:50.563: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:42:50.563: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:42:50.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:42:51.022: INFO: stderr: "I0126 21:42:50.818105    2974 log.go:172] (0xc000b3cfd0) (0xc000b2c320) Create stream\nI0126 21:42:50.818287    2974 log.go:172] (0xc000b3cfd0) (0xc000b2c320) Stream added, broadcasting: 1\nI0126 21:42:50.828491    2974 log.go:172] (0xc000b3cfd0) Reply frame received for 1\nI0126 21:42:50.828518    2974 log.go:172] (0xc000b3cfd0) (0xc000a5c0a0) Create stream\nI0126 21:42:50.828529    2974 log.go:172] (0xc000b3cfd0) (0xc000a5c0a0) Stream added, broadcasting: 3\nI0126 21:42:50.829411    2974 log.go:172] (0xc000b3cfd0) Reply frame received for 3\nI0126 21:42:50.829439    2974 log.go:172] (0xc000b3cfd0) (0xc000a360a0) Create stream\nI0126 21:42:50.829448    2974 log.go:172] (0xc000b3cfd0) (0xc000a360a0) Stream added, broadcasting: 5\nI0126 21:42:50.830948    2974 log.go:172] (0xc000b3cfd0) Reply frame received for 5\nI0126 21:42:50.915725    2974 log.go:172] (0xc000b3cfd0) Data frame received for 5\nI0126 21:42:50.915838    2974 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0126 21:42:50.915859    2974 log.go:172] (0xc000a360a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 21:42:50.915970    2974 log.go:172] (0xc000b3cfd0) Data frame received for 3\nI0126 21:42:50.916060    2974 log.go:172] (0xc000a5c0a0) (3) Data frame handling\nI0126 21:42:50.916080    2974 log.go:172] (0xc000a5c0a0) (3) Data frame sent\nI0126 21:42:51.011572    2974 log.go:172] (0xc000b3cfd0) Data frame received for 1\nI0126 21:42:51.011636    2974 log.go:172] (0xc000b3cfd0) (0xc000a5c0a0) Stream removed, broadcasting: 3\nI0126 21:42:51.011674    2974 log.go:172] (0xc000b3cfd0) (0xc000a360a0) Stream removed, broadcasting: 5\nI0126 21:42:51.011771    2974 log.go:172] (0xc000b2c320) (1) Data frame handling\nI0126 21:42:51.011810    2974 log.go:172] (0xc000b2c320) (1) Data frame sent\nI0126 21:42:51.011822    2974 log.go:172] (0xc000b3cfd0) (0xc000b2c320) Stream removed, broadcasting: 1\nI0126 21:42:51.011843    2974 log.go:172] (0xc000b3cfd0) Go away received\nI0126 21:42:51.013911    2974 log.go:172] (0xc000b3cfd0) (0xc000b2c320) Stream removed, broadcasting: 1\nI0126 21:42:51.014044    2974 log.go:172] (0xc000b3cfd0) (0xc000a5c0a0) Stream removed, broadcasting: 3\nI0126 21:42:51.014057    2974 log.go:172] (0xc000b3cfd0) (0xc000a360a0) Stream removed, broadcasting: 5\n"
Jan 26 21:42:51.022: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:42:51.022: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:42:51.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9743 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:42:51.353: INFO: stderr: "I0126 21:42:51.194437    2994 log.go:172] (0xc000970a50) (0xc0009c6000) Create stream\nI0126 21:42:51.194728    2994 log.go:172] (0xc000970a50) (0xc0009c6000) Stream added, broadcasting: 1\nI0126 21:42:51.199195    2994 log.go:172] (0xc000970a50) Reply frame received for 1\nI0126 21:42:51.199278    2994 log.go:172] (0xc000970a50) (0xc000a2a000) Create stream\nI0126 21:42:51.199294    2994 log.go:172] (0xc000970a50) (0xc000a2a000) Stream added, broadcasting: 3\nI0126 21:42:51.200609    2994 log.go:172] (0xc000970a50) Reply frame received for 3\nI0126 21:42:51.200724    2994 log.go:172] (0xc000970a50) (0xc000655900) Create stream\nI0126 21:42:51.200745    2994 log.go:172] (0xc000970a50) (0xc000655900) Stream added, broadcasting: 5\nI0126 21:42:51.202432    2994 log.go:172] (0xc000970a50) Reply frame received for 5\nI0126 21:42:51.264735    2994 log.go:172] (0xc000970a50) Data frame received for 5\nI0126 21:42:51.264951    2994 log.go:172] (0xc000655900) (5) Data frame handling\nI0126 21:42:51.264975    2994 log.go:172] (0xc000655900) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 21:42:51.265026    2994 log.go:172] (0xc000970a50) Data frame received for 3\nI0126 21:42:51.265042    2994 log.go:172] (0xc000a2a000) (3) Data frame handling\nI0126 21:42:51.265054    2994 log.go:172] (0xc000a2a000) (3) Data frame sent\nI0126 21:42:51.343129    2994 log.go:172] (0xc000970a50) Data frame received for 1\nI0126 21:42:51.343225    2994 log.go:172] (0xc0009c6000) (1) Data frame handling\nI0126 21:42:51.343264    2994 log.go:172] (0xc0009c6000) (1) Data frame sent\nI0126 21:42:51.343285    2994 log.go:172] (0xc000970a50) (0xc0009c6000) Stream removed, broadcasting: 1\nI0126 21:42:51.343389    2994 log.go:172] (0xc000970a50) (0xc000a2a000) Stream removed, broadcasting: 3\nI0126 21:42:51.343634    2994 log.go:172] (0xc000970a50) (0xc000655900) Stream removed, broadcasting: 5\nI0126 21:42:51.343752    2994 log.go:172] (0xc000970a50) Go away received\nI0126 21:42:51.343978    2994 log.go:172] (0xc000970a50) (0xc0009c6000) Stream removed, broadcasting: 1\nI0126 21:42:51.344031    2994 log.go:172] (0xc000970a50) (0xc000a2a000) Stream removed, broadcasting: 3\nI0126 21:42:51.344066    2994 log.go:172] (0xc000970a50) (0xc000655900) Stream removed, broadcasting: 5\n"
Jan 26 21:42:51.353: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:42:51.353: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:42:51.353: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 21:43:31.380: INFO: Deleting all statefulset in ns statefulset-9743
Jan 26 21:43:31.384: INFO: Scaling statefulset ss to 0
Jan 26 21:43:31.422: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 21:43:31.425: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:43:31.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9743" for this suite.

• [SLOW TEST:115.702 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":120,"skipped":1882,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:43:31.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 21:43:31.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc" in namespace "downward-api-6840" to be "success or failure"
Jan 26 21:43:31.610: INFO: Pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.384305ms
Jan 26 21:43:33.619: INFO: Pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023672277s
Jan 26 21:43:36.363: INFO: Pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.768338386s
Jan 26 21:43:38.371: INFO: Pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.776226055s
Jan 26 21:43:40.378: INFO: Pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.782641495s
STEP: Saw pod success
Jan 26 21:43:40.378: INFO: Pod "downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc" satisfied condition "success or failure"
Jan 26 21:43:40.381: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc container client-container: 
STEP: delete the pod
Jan 26 21:43:40.568: INFO: Waiting for pod downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc to disappear
Jan 26 21:43:40.586: INFO: Pod downwardapi-volume-71dff4da-a0fa-445b-a971-1802756386bc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:43:40.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6840" for this suite.

• [SLOW TEST:9.105 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:43:40.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-599484c9-459b-4176-8add-d2c123610744
STEP: Creating a pod to test consume secrets
Jan 26 21:43:40.765: INFO: Waiting up to 5m0s for pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2" in namespace "secrets-740" to be "success or failure"
Jan 26 21:43:40.787: INFO: Pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.100834ms
Jan 26 21:43:42.817: INFO: Pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052283875s
Jan 26 21:43:44.835: INFO: Pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070902127s
Jan 26 21:43:46.860: INFO: Pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095695709s
Jan 26 21:43:48.871: INFO: Pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106770309s
STEP: Saw pod success
Jan 26 21:43:48.871: INFO: Pod "pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2" satisfied condition "success or failure"
Jan 26 21:43:48.877: INFO: Trying to get logs from node jerma-node pod pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2 container secret-volume-test: 
STEP: delete the pod
Jan 26 21:43:49.025: INFO: Waiting for pod pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2 to disappear
Jan 26 21:43:49.091: INFO: Pod pod-secrets-220850d7-128c-4fef-aa25-c0cdf24140f2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:43:49.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-740" for this suite.

• [SLOW TEST:8.507 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1962,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:43:49.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 26 21:43:49.293: INFO: Waiting up to 5m0s for pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2" in namespace "emptydir-5839" to be "success or failure"
Jan 26 21:43:49.301: INFO: Pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029253ms
Jan 26 21:43:51.309: INFO: Pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016241592s
Jan 26 21:43:53.317: INFO: Pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023744453s
Jan 26 21:43:55.337: INFO: Pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044101351s
Jan 26 21:43:57.345: INFO: Pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052467222s
STEP: Saw pod success
Jan 26 21:43:57.345: INFO: Pod "pod-e5e270fb-ad2f-4146-889e-233894cafeb2" satisfied condition "success or failure"
Jan 26 21:43:57.352: INFO: Trying to get logs from node jerma-node pod pod-e5e270fb-ad2f-4146-889e-233894cafeb2 container test-container: 
STEP: delete the pod
Jan 26 21:43:57.409: INFO: Waiting for pod pod-e5e270fb-ad2f-4146-889e-233894cafeb2 to disappear
Jan 26 21:43:57.416: INFO: Pod pod-e5e270fb-ad2f-4146-889e-233894cafeb2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:43:57.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5839" for this suite.

• [SLOW TEST:8.377 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1962,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:43:57.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 26 21:43:57.670: INFO: Waiting up to 5m0s for pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c" in namespace "emptydir-4157" to be "success or failure"
Jan 26 21:43:57.677: INFO: Pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.841061ms
Jan 26 21:43:59.685: INFO: Pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014997004s
Jan 26 21:44:01.693: INFO: Pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022769493s
Jan 26 21:44:03.716: INFO: Pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046197089s
Jan 26 21:44:05.723: INFO: Pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053315477s
STEP: Saw pod success
Jan 26 21:44:05.723: INFO: Pod "pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c" satisfied condition "success or failure"
Jan 26 21:44:05.728: INFO: Trying to get logs from node jerma-node pod pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c container test-container: 
STEP: delete the pod
Jan 26 21:44:05.767: INFO: Waiting for pod pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c to disappear
Jan 26 21:44:05.773: INFO: Pod pod-baa37a3a-7606-4d7e-b6be-e2cba45def3c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:44:05.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4157" for this suite.

• [SLOW TEST:8.324 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1964,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:44:05.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0126 21:44:08.855777       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 21:44:08.855: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:44:08.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-446" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":125,"skipped":1985,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:44:08.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 26 21:44:10.630: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 26 21:44:12.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:44:14.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:44:16.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:44:18.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715671850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 21:44:21.686: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:44:21.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:44:23.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2580" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.410 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":126,"skipped":1997,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:44:23.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3025.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3025.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 21:44:35.539: INFO: DNS probes using dns-3025/dns-test-f8ed9189-c970-4137-9de5-53457f89b999 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:44:35.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3025" for this suite.

• [SLOW TEST:12.346 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":127,"skipped":2017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:44:35.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 26 21:44:35.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 26 21:44:49.271: INFO: >>> kubeConfig: /root/.kube/config
Jan 26 21:44:51.664: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:45:07.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4516" for this suite.

• [SLOW TEST:31.526 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":128,"skipped":2071,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:45:07.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-92f81d7f-3190-4e38-94ff-d69ff1f96397 in namespace container-probe-758
Jan 26 21:45:15.264: INFO: Started pod test-webserver-92f81d7f-3190-4e38-94ff-d69ff1f96397 in namespace container-probe-758
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 21:45:15.270: INFO: Initial restart count of pod test-webserver-92f81d7f-3190-4e38-94ff-d69ff1f96397 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:49:17.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-758" for this suite.

• [SLOW TEST:250.168 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2076,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:49:17.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 26 21:49:26.044: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6260ae4d-24e5-4f71-9399-fd7a9797ac1b"
Jan 26 21:49:26.044: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6260ae4d-24e5-4f71-9399-fd7a9797ac1b" in namespace "pods-3024" to be "terminated due to deadline exceeded"
Jan 26 21:49:26.073: INFO: Pod "pod-update-activedeadlineseconds-6260ae4d-24e5-4f71-9399-fd7a9797ac1b": Phase="Running", Reason="", readiness=true. Elapsed: 28.358095ms
Jan 26 21:49:28.385: INFO: Pod "pod-update-activedeadlineseconds-6260ae4d-24e5-4f71-9399-fd7a9797ac1b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.340719303s
Jan 26 21:49:28.385: INFO: Pod "pod-update-activedeadlineseconds-6260ae4d-24e5-4f71-9399-fd7a9797ac1b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:49:28.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3024" for this suite.

• [SLOW TEST:11.088 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2082,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:49:28.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 21:49:29.430: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 21:49:31.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:49:33.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:49:35.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:49:37.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672169, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 21:49:40.553: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:49:40.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4485" for this suite.
STEP: Destroying namespace "webhook-4485-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.418 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":131,"skipped":2124,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:49:40.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jan 26 21:49:40.949: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3302" to be "success or failure"
Jan 26 21:49:40.959: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.711277ms
Jan 26 21:49:42.965: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015569988s
Jan 26 21:49:44.973: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024074876s
Jan 26 21:49:46.983: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034201869s
Jan 26 21:49:48.992: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042909523s
Jan 26 21:49:51.041: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091987648s
Jan 26 21:49:53.047: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.097529577s
STEP: Saw pod success
Jan 26 21:49:53.047: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 26 21:49:53.049: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 26 21:49:53.197: INFO: Waiting for pod pod-host-path-test to disappear
Jan 26 21:49:53.220: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:49:53.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3302" for this suite.

• [SLOW TEST:12.395 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:49:53.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 21:49:53.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011" in namespace "projected-5874" to be "success or failure"
Jan 26 21:49:53.402: INFO: Pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011": Phase="Pending", Reason="", readiness=false. Elapsed: 7.927907ms
Jan 26 21:49:55.411: INFO: Pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016964553s
Jan 26 21:49:57.423: INFO: Pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028101213s
Jan 26 21:49:59.762: INFO: Pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011": Phase="Pending", Reason="", readiness=false. Elapsed: 6.367459472s
Jan 26 21:50:01.771: INFO: Pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.37613772s
STEP: Saw pod success
Jan 26 21:50:01.771: INFO: Pod "downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011" satisfied condition "success or failure"
Jan 26 21:50:01.776: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011 container client-container: 
STEP: delete the pod
Jan 26 21:50:01.917: INFO: Waiting for pod downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011 to disappear
Jan 26 21:50:01.925: INFO: Pod downwardapi-volume-9ff1f594-541e-4dfc-af6a-4c470245c011 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:01.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5874" for this suite.

• [SLOW TEST:8.710 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2195,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:01.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-2ccd0589-4213-4839-8bf5-fffbe07edd70
STEP: Creating a pod to test consume secrets
Jan 26 21:50:02.088: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78" in namespace "projected-9469" to be "success or failure"
Jan 26 21:50:02.103: INFO: Pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78": Phase="Pending", Reason="", readiness=false. Elapsed: 14.721227ms
Jan 26 21:50:04.138: INFO: Pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049223417s
Jan 26 21:50:06.147: INFO: Pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058265403s
Jan 26 21:50:08.153: INFO: Pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064200566s
Jan 26 21:50:10.162: INFO: Pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073443686s
STEP: Saw pod success
Jan 26 21:50:10.162: INFO: Pod "pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78" satisfied condition "success or failure"
Jan 26 21:50:10.166: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 21:50:10.208: INFO: Waiting for pod pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78 to disappear
Jan 26 21:50:10.212: INFO: Pod pod-projected-secrets-0a297f73-e598-4e8b-a87b-648e29941d78 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:10.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9469" for this suite.

• [SLOW TEST:8.278 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2204,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:10.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-caf1a63f-e6c7-44e2-a0f4-f4f558c4af60
STEP: Creating a pod to test consume configMaps
Jan 26 21:50:10.547: INFO: Waiting up to 5m0s for pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26" in namespace "configmap-9207" to be "success or failure"
Jan 26 21:50:10.564: INFO: Pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 17.271355ms
Jan 26 21:50:12.579: INFO: Pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031371862s
Jan 26 21:50:14.599: INFO: Pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051971109s
Jan 26 21:50:16.622: INFO: Pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074381349s
Jan 26 21:50:18.629: INFO: Pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082126417s
STEP: Saw pod success
Jan 26 21:50:18.629: INFO: Pod "pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26" satisfied condition "success or failure"
Jan 26 21:50:18.634: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26 container configmap-volume-test: 
STEP: delete the pod
Jan 26 21:50:18.692: INFO: Waiting for pod pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26 to disappear
Jan 26 21:50:18.761: INFO: Pod pod-configmaps-e35259e1-4e4d-47a8-b364-f404e3ad9d26 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9207" for this suite.

• [SLOW TEST:8.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2246,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:18.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 21:50:18.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f" in namespace "projected-2964" to be "success or failure"
Jan 26 21:50:18.938: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.084223ms
Jan 26 21:50:20.949: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019095811s
Jan 26 21:50:22.958: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028831365s
Jan 26 21:50:24.977: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047412428s
Jan 26 21:50:26.982: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052303844s
Jan 26 21:50:28.987: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057697374s
STEP: Saw pod success
Jan 26 21:50:28.987: INFO: Pod "downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f" satisfied condition "success or failure"
Jan 26 21:50:28.989: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f container client-container: 
STEP: delete the pod
Jan 26 21:50:29.050: INFO: Waiting for pod downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f to disappear
Jan 26 21:50:29.060: INFO: Pod downwardapi-volume-7b153ff0-7dfe-4f7c-bbda-8fce8dad8c9f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:29.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2964" for this suite.

• [SLOW TEST:10.355 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2248,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:29.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8323/configmap-test-7b8fb3a5-ad64-42d6-bc50-f440441a4eed
STEP: Creating a pod to test consume configMaps
Jan 26 21:50:29.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e" in namespace "configmap-8323" to be "success or failure"
Jan 26 21:50:29.351: INFO: Pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.402768ms
Jan 26 21:50:31.359: INFO: Pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042808106s
Jan 26 21:50:33.365: INFO: Pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048640926s
Jan 26 21:50:35.373: INFO: Pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057330744s
Jan 26 21:50:37.395: INFO: Pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079608843s
STEP: Saw pod success
Jan 26 21:50:37.396: INFO: Pod "pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e" satisfied condition "success or failure"
Jan 26 21:50:37.398: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e container env-test: 
STEP: delete the pod
Jan 26 21:50:37.459: INFO: Waiting for pod pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e to disappear
Jan 26 21:50:37.478: INFO: Pod pod-configmaps-ced8c8af-eb29-480c-80c2-a74170e7d31e no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:37.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8323" for this suite.

• [SLOW TEST:8.350 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2266,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:37.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 26 21:50:37.638: INFO: Waiting up to 5m0s for pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608" in namespace "emptydir-5415" to be "success or failure"
Jan 26 21:50:37.653: INFO: Pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608": Phase="Pending", Reason="", readiness=false. Elapsed: 14.658795ms
Jan 26 21:50:39.662: INFO: Pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024394148s
Jan 26 21:50:41.670: INFO: Pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032419963s
Jan 26 21:50:43.678: INFO: Pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040202417s
Jan 26 21:50:45.688: INFO: Pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050463378s
STEP: Saw pod success
Jan 26 21:50:45.689: INFO: Pod "pod-383cc51f-0427-4b63-ba9d-5616ac81d608" satisfied condition "success or failure"
Jan 26 21:50:45.694: INFO: Trying to get logs from node jerma-node pod pod-383cc51f-0427-4b63-ba9d-5616ac81d608 container test-container: 
STEP: delete the pod
Jan 26 21:50:45.749: INFO: Waiting for pod pod-383cc51f-0427-4b63-ba9d-5616ac81d608 to disappear
Jan 26 21:50:45.758: INFO: Pod pod-383cc51f-0427-4b63-ba9d-5616ac81d608 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:45.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5415" for this suite.

• [SLOW TEST:8.290 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2269,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:45.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-5678daa1-8025-473c-ab8d-086a53745b47
STEP: Creating a pod to test consume secrets
Jan 26 21:50:46.018: INFO: Waiting up to 5m0s for pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9" in namespace "secrets-1550" to be "success or failure"
Jan 26 21:50:46.024: INFO: Pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.852875ms
Jan 26 21:50:48.030: INFO: Pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011614582s
Jan 26 21:50:50.038: INFO: Pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01916176s
Jan 26 21:50:52.045: INFO: Pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027095536s
Jan 26 21:50:54.052: INFO: Pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.033508804s
STEP: Saw pod success
Jan 26 21:50:54.052: INFO: Pod "pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9" satisfied condition "success or failure"
Jan 26 21:50:54.054: INFO: Trying to get logs from node jerma-node pod pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9 container secret-env-test: 
STEP: delete the pod
Jan 26 21:50:54.097: INFO: Waiting for pod pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9 to disappear
Jan 26 21:50:54.126: INFO: Pod pod-secrets-9bd8804e-c4d4-434b-995e-6786561b7da9 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:50:54.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1550" for this suite.

• [SLOW TEST:8.469 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2273,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:50:54.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-49a05e7d-effc-43d0-be1f-ae381161d225 in namespace container-probe-8786
Jan 26 21:51:02.485: INFO: Started pod liveness-49a05e7d-effc-43d0-be1f-ae381161d225 in namespace container-probe-8786
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 21:51:02.489: INFO: Initial restart count of pod liveness-49a05e7d-effc-43d0-be1f-ae381161d225 is 0
Jan 26 21:51:28.677: INFO: Restart count of pod container-probe-8786/liveness-49a05e7d-effc-43d0-be1f-ae381161d225 is now 1 (26.187398838s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:51:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8786" for this suite.

• [SLOW TEST:34.566 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2279,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:51:28.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 26 21:51:29.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9434 /api/v1/namespaces/watch-9434/configmaps/e2e-watch-test-label-changed 8ad05552-f688-4073-9666-9f66c9e2cac9 4545669 0 2020-01-26 21:51:29 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 21:51:29.050: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9434 /api/v1/namespaces/watch-9434/configmaps/e2e-watch-test-label-changed 8ad05552-f688-4073-9666-9f66c9e2cac9 4545671 0 2020-01-26 21:51:29 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 26 21:51:29.050: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9434 /api/v1/namespaces/watch-9434/configmaps/e2e-watch-test-label-changed 8ad05552-f688-4073-9666-9f66c9e2cac9 4545672 0 2020-01-26 21:51:29 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 26 21:51:39.086: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9434 /api/v1/namespaces/watch-9434/configmaps/e2e-watch-test-label-changed 8ad05552-f688-4073-9666-9f66c9e2cac9 4545707 0 2020-01-26 21:51:29 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 21:51:39.086: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9434 /api/v1/namespaces/watch-9434/configmaps/e2e-watch-test-label-changed 8ad05552-f688-4073-9666-9f66c9e2cac9 4545708 0 2020-01-26 21:51:29 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 26 21:51:39.086: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9434 /api/v1/namespaces/watch-9434/configmaps/e2e-watch-test-label-changed 8ad05552-f688-4073-9666-9f66c9e2cac9 4545709 0 2020-01-26 21:51:29 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:51:39.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9434" for this suite.

• [SLOW TEST:10.282 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":141,"skipped":2300,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:51:39.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7028
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-7028
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7028
Jan 26 21:51:39.295: INFO: Found 0 stateful pods, waiting for 1
Jan 26 21:51:49.304: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 26 21:51:49.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:51:51.895: INFO: stderr: "I0126 21:51:51.645021    3016 log.go:172] (0xc0000f7760) (0xc0006e5f40) Create stream\nI0126 21:51:51.645219    3016 log.go:172] (0xc0000f7760) (0xc0006e5f40) Stream added, broadcasting: 1\nI0126 21:51:51.650876    3016 log.go:172] (0xc0000f7760) Reply frame received for 1\nI0126 21:51:51.650930    3016 log.go:172] (0xc0000f7760) (0xc000664780) Create stream\nI0126 21:51:51.650942    3016 log.go:172] (0xc0000f7760) (0xc000664780) Stream added, broadcasting: 3\nI0126 21:51:51.657056    3016 log.go:172] (0xc0000f7760) Reply frame received for 3\nI0126 21:51:51.657164    3016 log.go:172] (0xc0000f7760) (0xc000433540) Create stream\nI0126 21:51:51.657189    3016 log.go:172] (0xc0000f7760) (0xc000433540) Stream added, broadcasting: 5\nI0126 21:51:51.659628    3016 log.go:172] (0xc0000f7760) Reply frame received for 5\nI0126 21:51:51.761561    3016 log.go:172] (0xc0000f7760) Data frame received for 5\nI0126 21:51:51.761734    3016 log.go:172] (0xc000433540) (5) Data frame handling\nI0126 21:51:51.761786    3016 log.go:172] (0xc000433540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:51:51.789883    3016 log.go:172] (0xc0000f7760) Data frame received for 3\nI0126 21:51:51.790006    3016 log.go:172] (0xc000664780) (3) Data frame handling\nI0126 21:51:51.790060    3016 log.go:172] (0xc000664780) (3) Data frame sent\nI0126 21:51:51.882180    3016 log.go:172] (0xc0000f7760) (0xc000664780) Stream removed, broadcasting: 3\nI0126 21:51:51.882414    3016 log.go:172] (0xc0000f7760) (0xc000433540) Stream removed, broadcasting: 5\nI0126 21:51:51.882618    3016 log.go:172] (0xc0000f7760) Data frame received for 1\nI0126 21:51:51.882700    3016 log.go:172] (0xc0006e5f40) (1) Data frame handling\nI0126 21:51:51.882734    3016 log.go:172] (0xc0006e5f40) (1) Data frame sent\nI0126 21:51:51.882821    3016 log.go:172] (0xc0000f7760) (0xc0006e5f40) Stream removed, broadcasting: 1\nI0126 21:51:51.882875    3016 log.go:172] (0xc0000f7760) Go away received\nI0126 21:51:51.884420    3016 log.go:172] (0xc0000f7760) (0xc0006e5f40) Stream removed, broadcasting: 1\nI0126 21:51:51.884442    3016 log.go:172] (0xc0000f7760) (0xc000664780) Stream removed, broadcasting: 3\nI0126 21:51:51.884452    3016 log.go:172] (0xc0000f7760) (0xc000433540) Stream removed, broadcasting: 5\n"
Jan 26 21:51:51.895: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:51:51.895: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:51:51.901: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 26 21:52:01.916: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:52:01.916: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 21:52:01.942: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 26 21:52:01.942: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:01.942: INFO: 
Jan 26 21:52:01.942: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 26 21:52:03.780: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990649766s
Jan 26 21:52:04.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.153084402s
Jan 26 21:52:05.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.958857187s
Jan 26 21:52:06.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.951873754s
Jan 26 21:52:08.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940342079s
Jan 26 21:52:09.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.843599691s
Jan 26 21:52:10.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.796852702s
Jan 26 21:52:11.265: INFO: Verifying statefulset ss doesn't scale past 3 for another 688.190726ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7028
Jan 26 21:52:12.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:52:12.716: INFO: stderr: "I0126 21:52:12.463899    3049 log.go:172] (0xc000b70bb0) (0xc0006cbcc0) Create stream\nI0126 21:52:12.464361    3049 log.go:172] (0xc000b70bb0) (0xc0006cbcc0) Stream added, broadcasting: 1\nI0126 21:52:12.474428    3049 log.go:172] (0xc000b70bb0) Reply frame received for 1\nI0126 21:52:12.474767    3049 log.go:172] (0xc000b70bb0) (0xc000b981e0) Create stream\nI0126 21:52:12.474818    3049 log.go:172] (0xc000b70bb0) (0xc000b981e0) Stream added, broadcasting: 3\nI0126 21:52:12.477280    3049 log.go:172] (0xc000b70bb0) Reply frame received for 3\nI0126 21:52:12.477333    3049 log.go:172] (0xc000b70bb0) (0xc000660640) Create stream\nI0126 21:52:12.477345    3049 log.go:172] (0xc000b70bb0) (0xc000660640) Stream added, broadcasting: 5\nI0126 21:52:12.479106    3049 log.go:172] (0xc000b70bb0) Reply frame received for 5\nI0126 21:52:12.577459    3049 log.go:172] (0xc000b70bb0) Data frame received for 3\nI0126 21:52:12.577840    3049 log.go:172] (0xc000b981e0) (3) Data frame handling\nI0126 21:52:12.577896    3049 log.go:172] (0xc000b981e0) (3) Data frame sent\nI0126 21:52:12.578036    3049 log.go:172] (0xc000b70bb0) Data frame received for 5\nI0126 21:52:12.578073    3049 log.go:172] (0xc000660640) (5) Data frame handling\nI0126 21:52:12.578098    3049 log.go:172] (0xc000660640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 21:52:12.697289    3049 log.go:172] (0xc000b70bb0) (0xc000660640) Stream removed, broadcasting: 5\nI0126 21:52:12.697576    3049 log.go:172] (0xc000b70bb0) (0xc000b981e0) Stream removed, broadcasting: 3\nI0126 21:52:12.697670    3049 log.go:172] (0xc000b70bb0) Data frame received for 1\nI0126 21:52:12.697700    3049 log.go:172] (0xc0006cbcc0) (1) Data frame handling\nI0126 21:52:12.697717    3049 log.go:172] (0xc0006cbcc0) (1) Data frame sent\nI0126 21:52:12.697735    3049 log.go:172] (0xc000b70bb0) (0xc0006cbcc0) Stream removed, broadcasting: 1\nI0126 21:52:12.697753    3049 log.go:172] (0xc000b70bb0) Go away received\nI0126 21:52:12.699245    3049 log.go:172] (0xc000b70bb0) (0xc0006cbcc0) Stream removed, broadcasting: 1\nI0126 21:52:12.699266    3049 log.go:172] (0xc000b70bb0) (0xc000b981e0) Stream removed, broadcasting: 3\nI0126 21:52:12.699293    3049 log.go:172] (0xc000b70bb0) (0xc000660640) Stream removed, broadcasting: 5\n"
Jan 26 21:52:12.716: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:52:12.716: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:52:12.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:52:13.107: INFO: stderr: "I0126 21:52:12.919789    3069 log.go:172] (0xc000694b00) (0xc0007bfd60) Create stream\nI0126 21:52:12.919968    3069 log.go:172] (0xc000694b00) (0xc0007bfd60) Stream added, broadcasting: 1\nI0126 21:52:12.925002    3069 log.go:172] (0xc000694b00) Reply frame received for 1\nI0126 21:52:12.925044    3069 log.go:172] (0xc000694b00) (0xc0007bfe00) Create stream\nI0126 21:52:12.925056    3069 log.go:172] (0xc000694b00) (0xc0007bfe00) Stream added, broadcasting: 3\nI0126 21:52:12.926659    3069 log.go:172] (0xc000694b00) Reply frame received for 3\nI0126 21:52:12.926696    3069 log.go:172] (0xc000694b00) (0xc000642640) Create stream\nI0126 21:52:12.926709    3069 log.go:172] (0xc000694b00) (0xc000642640) Stream added, broadcasting: 5\nI0126 21:52:12.927645    3069 log.go:172] (0xc000694b00) Reply frame received for 5\nI0126 21:52:12.999409    3069 log.go:172] (0xc000694b00) Data frame received for 3\nI0126 21:52:12.999460    3069 log.go:172] (0xc0007bfe00) (3) Data frame handling\nI0126 21:52:12.999475    3069 log.go:172] (0xc0007bfe00) (3) Data frame sent\nI0126 21:52:13.000596    3069 log.go:172] (0xc000694b00) Data frame received for 5\nI0126 21:52:13.000658    3069 log.go:172] (0xc000642640) (5) Data frame handling\nI0126 21:52:13.000675    3069 log.go:172] (0xc000642640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0126 21:52:13.095613    3069 log.go:172] (0xc000694b00) Data frame received for 1\nI0126 21:52:13.095700    3069 log.go:172] (0xc0007bfd60) (1) Data frame handling\nI0126 21:52:13.095759    3069 log.go:172] (0xc0007bfd60) (1) Data frame sent\nI0126 21:52:13.095817    3069 log.go:172] (0xc000694b00) (0xc0007bfd60) Stream removed, broadcasting: 1\nI0126 21:52:13.096572    3069 log.go:172] (0xc000694b00) (0xc0007bfe00) Stream removed, broadcasting: 3\nI0126 21:52:13.096876    3069 log.go:172] (0xc000694b00) (0xc000642640) Stream removed, broadcasting: 5\nI0126 21:52:13.096917    3069 log.go:172] (0xc000694b00) (0xc0007bfd60) Stream removed, broadcasting: 1\nI0126 21:52:13.096936    3069 log.go:172] (0xc000694b00) (0xc0007bfe00) Stream removed, broadcasting: 3\nI0126 21:52:13.096946    3069 log.go:172] (0xc000694b00) (0xc000642640) Stream removed, broadcasting: 5\n"
Jan 26 21:52:13.107: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:52:13.107: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:52:13.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:52:13.462: INFO: stderr: "I0126 21:52:13.309664    3090 log.go:172] (0xc000bafce0) (0xc000ab2960) Create stream\nI0126 21:52:13.309848    3090 log.go:172] (0xc000bafce0) (0xc000ab2960) Stream added, broadcasting: 1\nI0126 21:52:13.316524    3090 log.go:172] (0xc000bafce0) Reply frame received for 1\nI0126 21:52:13.316584    3090 log.go:172] (0xc000bafce0) (0xc0006d5cc0) Create stream\nI0126 21:52:13.316593    3090 log.go:172] (0xc000bafce0) (0xc0006d5cc0) Stream added, broadcasting: 3\nI0126 21:52:13.317780    3090 log.go:172] (0xc000bafce0) Reply frame received for 3\nI0126 21:52:13.317801    3090 log.go:172] (0xc000bafce0) (0xc0006ac8c0) Create stream\nI0126 21:52:13.317806    3090 log.go:172] (0xc000bafce0) (0xc0006ac8c0) Stream added, broadcasting: 5\nI0126 21:52:13.319006    3090 log.go:172] (0xc000bafce0) Reply frame received for 5\nI0126 21:52:13.388022    3090 log.go:172] (0xc000bafce0) Data frame received for 5\nI0126 21:52:13.388172    3090 log.go:172] (0xc0006ac8c0) (5) Data frame handling\nI0126 21:52:13.388212    3090 log.go:172] (0xc0006ac8c0) (5) Data frame sent\nI0126 21:52:13.388221    3090 log.go:172] (0xc000bafce0) Data frame received for 5\nI0126 21:52:13.388237    3090 log.go:172] (0xc0006ac8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0126 21:52:13.388302    3090 log.go:172] (0xc0006ac8c0) (5) Data frame sent\nI0126 21:52:13.388331    3090 log.go:172] (0xc000bafce0) Data frame received for 3\nI0126 21:52:13.388345    3090 log.go:172] (0xc0006d5cc0) (3) Data frame handling\nI0126 21:52:13.388369    3090 log.go:172] (0xc0006d5cc0) (3) Data frame sent\nI0126 21:52:13.452403    3090 log.go:172] (0xc000bafce0) (0xc0006ac8c0) Stream removed, broadcasting: 5\nI0126 21:52:13.452609    3090 log.go:172] (0xc000bafce0) Data frame received for 1\nI0126 21:52:13.452679    3090 log.go:172] (0xc000bafce0) (0xc0006d5cc0) Stream removed, broadcasting: 3\nI0126 21:52:13.452786    3090 log.go:172] (0xc000ab2960) (1) Data frame handling\nI0126 21:52:13.452819    3090 log.go:172] (0xc000ab2960) (1) Data frame sent\nI0126 21:52:13.452842    3090 log.go:172] (0xc000bafce0) (0xc000ab2960) Stream removed, broadcasting: 1\nI0126 21:52:13.452860    3090 log.go:172] (0xc000bafce0) Go away received\nI0126 21:52:13.453880    3090 log.go:172] (0xc000bafce0) (0xc000ab2960) Stream removed, broadcasting: 1\nI0126 21:52:13.453895    3090 log.go:172] (0xc000bafce0) (0xc0006d5cc0) Stream removed, broadcasting: 3\nI0126 21:52:13.453903    3090 log.go:172] (0xc000bafce0) (0xc0006ac8c0) Stream removed, broadcasting: 5\n"
Jan 26 21:52:13.462: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 21:52:13.462: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 21:52:13.469: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 26 21:52:23.477: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 21:52:23.477: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 21:52:23.477: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 26 21:52:23.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:52:23.969: INFO: stderr: "I0126 21:52:23.753849    3111 log.go:172] (0xc000012e70) (0xc000990000) Create stream\nI0126 21:52:23.754234    3111 log.go:172] (0xc000012e70) (0xc000990000) Stream added, broadcasting: 1\nI0126 21:52:23.759617    3111 log.go:172] (0xc000012e70) Reply frame received for 1\nI0126 21:52:23.759760    3111 log.go:172] (0xc000012e70) (0xc0006a5ae0) Create stream\nI0126 21:52:23.759794    3111 log.go:172] (0xc000012e70) (0xc0006a5ae0) Stream added, broadcasting: 3\nI0126 21:52:23.763707    3111 log.go:172] (0xc000012e70) Reply frame received for 3\nI0126 21:52:23.763836    3111 log.go:172] (0xc000012e70) (0xc0006a5cc0) Create stream\nI0126 21:52:23.763867    3111 log.go:172] (0xc000012e70) (0xc0006a5cc0) Stream added, broadcasting: 5\nI0126 21:52:23.765589    3111 log.go:172] (0xc000012e70) Reply frame received for 5\nI0126 21:52:23.843006    3111 log.go:172] (0xc000012e70) Data frame received for 3\nI0126 21:52:23.843320    3111 log.go:172] (0xc000012e70) Data frame received for 5\nI0126 21:52:23.843541    3111 log.go:172] (0xc0006a5cc0) (5) Data frame handling\nI0126 21:52:23.843591    3111 log.go:172] (0xc0006a5cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:52:23.843673    3111 log.go:172] (0xc0006a5ae0) (3) Data frame handling\nI0126 21:52:23.843748    3111 log.go:172] (0xc0006a5ae0) (3) Data frame sent\nI0126 21:52:23.950006    3111 log.go:172] (0xc000012e70) Data frame received for 1\nI0126 21:52:23.950374    3111 log.go:172] (0xc000012e70) (0xc0006a5cc0) Stream removed, broadcasting: 5\nI0126 21:52:23.950483    3111 log.go:172] (0xc000990000) (1) Data frame handling\nI0126 21:52:23.950512    3111 log.go:172] (0xc000990000) (1) Data frame sent\nI0126 21:52:23.950593    3111 log.go:172] (0xc000012e70) (0xc0006a5ae0) Stream removed, broadcasting: 3\nI0126 21:52:23.950679    3111 log.go:172] (0xc000012e70) (0xc000990000) Stream removed, broadcasting: 1\nI0126 21:52:23.950743    3111 log.go:172] (0xc000012e70) Go away received\nI0126 21:52:23.953403    3111 log.go:172] (0xc000012e70) (0xc000990000) Stream removed, broadcasting: 1\nI0126 21:52:23.953603    3111 log.go:172] (0xc000012e70) (0xc0006a5ae0) Stream removed, broadcasting: 3\nI0126 21:52:23.953638    3111 log.go:172] (0xc000012e70) (0xc0006a5cc0) Stream removed, broadcasting: 5\n"
Jan 26 21:52:23.969: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:52:23.969: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:52:23.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:52:24.417: INFO: stderr: "I0126 21:52:24.209610    3131 log.go:172] (0xc000a920b0) (0xc000b20820) Create stream\nI0126 21:52:24.209859    3131 log.go:172] (0xc000a920b0) (0xc000b20820) Stream added, broadcasting: 1\nI0126 21:52:24.213850    3131 log.go:172] (0xc000a920b0) Reply frame received for 1\nI0126 21:52:24.213954    3131 log.go:172] (0xc000a920b0) (0xc000b208c0) Create stream\nI0126 21:52:24.213962    3131 log.go:172] (0xc000a920b0) (0xc000b208c0) Stream added, broadcasting: 3\nI0126 21:52:24.215027    3131 log.go:172] (0xc000a920b0) Reply frame received for 3\nI0126 21:52:24.215048    3131 log.go:172] (0xc000a920b0) (0xc00062dc20) Create stream\nI0126 21:52:24.215054    3131 log.go:172] (0xc000a920b0) (0xc00062dc20) Stream added, broadcasting: 5\nI0126 21:52:24.216074    3131 log.go:172] (0xc000a920b0) Reply frame received for 5\nI0126 21:52:24.288732    3131 log.go:172] (0xc000a920b0) Data frame received for 5\nI0126 21:52:24.288777    3131 log.go:172] (0xc00062dc20) (5) Data frame handling\nI0126 21:52:24.288809    3131 log.go:172] (0xc00062dc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:52:24.321989    3131 log.go:172] (0xc000a920b0) Data frame received for 3\nI0126 21:52:24.322017    3131 log.go:172] (0xc000b208c0) (3) Data frame handling\nI0126 21:52:24.322038    3131 log.go:172] (0xc000b208c0) (3) Data frame sent\nI0126 21:52:24.394309    3131 log.go:172] (0xc000a920b0) Data frame received for 1\nI0126 21:52:24.394413    3131 log.go:172] (0xc000a920b0) (0xc00062dc20) Stream removed, broadcasting: 5\nI0126 21:52:24.394585    3131 log.go:172] (0xc000a920b0) (0xc000b208c0) Stream removed, broadcasting: 3\nI0126 21:52:24.394654    3131 log.go:172] (0xc000b20820) (1) Data frame handling\nI0126 21:52:24.394689    3131 log.go:172] (0xc000b20820) (1) Data frame sent\nI0126 21:52:24.394698    3131 log.go:172] (0xc000a920b0) (0xc000b20820) Stream removed, broadcasting: 1\nI0126 21:52:24.394720    3131 log.go:172] (0xc000a920b0) Go away received\nI0126 21:52:24.408083    3131 log.go:172] (0xc000a920b0) (0xc000b20820) Stream removed, broadcasting: 1\nI0126 21:52:24.408151    3131 log.go:172] (0xc000a920b0) (0xc000b208c0) Stream removed, broadcasting: 3\nI0126 21:52:24.408163    3131 log.go:172] (0xc000a920b0) (0xc00062dc20) Stream removed, broadcasting: 5\n"
Jan 26 21:52:24.418: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:52:24.418: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:52:24.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 21:52:24.976: INFO: stderr: "I0126 21:52:24.717755    3153 log.go:172] (0xc0000f4370) (0xc00096a140) Create stream\nI0126 21:52:24.718365    3153 log.go:172] (0xc0000f4370) (0xc00096a140) Stream added, broadcasting: 1\nI0126 21:52:24.726834    3153 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0126 21:52:24.726935    3153 log.go:172] (0xc0000f4370) (0xc0008da000) Create stream\nI0126 21:52:24.726956    3153 log.go:172] (0xc0000f4370) (0xc0008da000) Stream added, broadcasting: 3\nI0126 21:52:24.730008    3153 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0126 21:52:24.730200    3153 log.go:172] (0xc0000f4370) (0xc0008da0a0) Create stream\nI0126 21:52:24.730224    3153 log.go:172] (0xc0000f4370) (0xc0008da0a0) Stream added, broadcasting: 5\nI0126 21:52:24.731655    3153 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0126 21:52:24.815181    3153 log.go:172] (0xc0000f4370) Data frame received for 5\nI0126 21:52:24.815257    3153 log.go:172] (0xc0008da0a0) (5) Data frame handling\nI0126 21:52:24.815276    3153 log.go:172] (0xc0008da0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 21:52:24.872677    3153 log.go:172] (0xc0000f4370) Data frame received for 3\nI0126 21:52:24.872813    3153 log.go:172] (0xc0008da000) (3) Data frame handling\nI0126 21:52:24.872851    3153 log.go:172] (0xc0008da000) (3) Data frame sent\nI0126 21:52:24.964243    3153 log.go:172] (0xc0000f4370) Data frame received for 1\nI0126 21:52:24.964527    3153 log.go:172] (0xc00096a140) (1) Data frame handling\nI0126 21:52:24.964599    3153 log.go:172] (0xc00096a140) (1) Data frame sent\nI0126 21:52:24.965335    3153 log.go:172] (0xc0000f4370) (0xc0008da000) Stream removed, broadcasting: 3\nI0126 21:52:24.965446    3153 log.go:172] (0xc0000f4370) (0xc00096a140) Stream removed, broadcasting: 1\nI0126 21:52:24.965769    3153 log.go:172] (0xc0000f4370) (0xc0008da0a0) Stream removed, broadcasting: 5\nI0126 21:52:24.965883    3153 log.go:172] (0xc0000f4370) Go away received\nI0126 21:52:24.967130    3153 log.go:172] (0xc0000f4370) (0xc00096a140) Stream removed, broadcasting: 1\nI0126 21:52:24.967148    3153 log.go:172] (0xc0000f4370) (0xc0008da000) Stream removed, broadcasting: 3\nI0126 21:52:24.967167    3153 log.go:172] (0xc0000f4370) (0xc0008da0a0) Stream removed, broadcasting: 5\n"
Jan 26 21:52:24.977: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 21:52:24.977: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 21:52:24.977: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 21:52:24.982: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 26 21:52:34.994: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:52:34.994: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:52:34.994: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 21:52:35.034: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 21:52:35.034: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:35.034: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:35.034: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:35.034: INFO: 
Jan 26 21:52:35.034: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 21:52:36.808: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 21:52:36.808: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:36.808: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:36.809: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:36.809: INFO: 
Jan 26 21:52:36.809: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 21:52:37.820: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 21:52:37.821: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:37.821: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:37.821: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:37.821: INFO: 
Jan 26 21:52:37.821: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 21:52:38.836: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 21:52:38.836: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:38.836: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:38.836: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:38.836: INFO: 
Jan 26 21:52:38.836: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 21:52:39.848: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 21:52:39.848: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:39.848: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:39.849: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:39.849: INFO: 
Jan 26 21:52:39.849: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 21:52:40.856: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 21:52:40.856: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:40.856: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:40.856: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:40.856: INFO: 
Jan 26 21:52:40.856: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 21:52:41.867: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 26 21:52:41.867: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:41.867: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:41.868: INFO: 
Jan 26 21:52:41.868: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 26 21:52:42.877: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 26 21:52:42.877: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:42.877: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:42.877: INFO: 
Jan 26 21:52:42.877: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 26 21:52:43.888: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 26 21:52:43.888: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:43.888: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:43.889: INFO: 
Jan 26 21:52:43.889: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 26 21:52:44.897: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 26 21:52:44.898: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:51:39 +0000 UTC  }]
Jan 26 21:52:44.898: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 21:52:01 +0000 UTC  }]
Jan 26 21:52:44.898: INFO: 
Jan 26 21:52:44.898: INFO: StatefulSet ss has not reached scale 0, at 2
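The table-and-count loop above is the scale-down wait: list the StatefulSet's pods, log their conditions, and try again a second later until none remain. A minimal client-go sketch of the same wait, assuming the kubeconfig path from this run and a placeholder label selector (the real framework derives the selector from the StatefulSet object itself):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll once per second, for up to 10 minutes, until the selector matches no pods.
	err = wait.PollImmediate(time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("statefulset-7028").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "baz=blah"}) // placeholder selector for ss's pods
		if err != nil {
			return false, err
		}
		fmt.Printf("StatefulSet ss has not reached scale 0, at %d\n", len(pods.Items))
		return len(pods.Items) == 0, nil
	})
	if err != nil {
		panic(err)
	}
}
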
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-7028
Jan 26 21:52:45.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:52:46.147: INFO: rc: 1
Jan 26 21:52:46.148: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jan 26 21:52:56.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:52:56.313: INFO: rc: 1
Jan 26 21:52:56.313: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... 28 further identical attempts follow, roughly one every 10 seconds from 21:53:06 to 21:57:41, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found ...]
Jan 26 21:57:51.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7028 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 21:57:51.585: INFO: rc: 1
Jan 26 21:57:51.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
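The block above is RunHostCmd retrying a kubectl exec every 10 seconds. A sketch of that retry shape using os/exec, with the command, namespace, and pod name copied from the log; the helper itself is illustrative, not the framework's own:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd shells out to kubectl exec, mirroring the command lines above.
func runHostCmd(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	const cmd = "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := runHostCmd("statefulset-7028", "ss-0", cmd)
		if err == nil {
			fmt.Println(out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Println("Waiting 10s to retry failed RunHostCmd:", err)
		time.Sleep(10 * time.Second)
	}
}

The failures in this run are expected: the burst scale-down had already begun deleting ss-0 (first "container not found", then "pods \"ss-0\" not found"), so the framework stops retrying at 21:57:51 and proceeds to scale the set to 0.
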
Jan 26 21:57:51.586: INFO: Scaling statefulset ss to 0
Jan 26 21:57:51.597: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 21:57:51.599: INFO: Deleting all statefulset in ns statefulset-7028
Jan 26 21:57:51.604: INFO: Scaling statefulset ss to 0
Jan 26 21:57:51.610: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 21:57:51.612: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:57:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7028" for this suite.

• [SLOW TEST:372.546 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":142,"skipped":2313,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:57:51.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:58:25.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-838" for this suite.
STEP: Destroying namespace "nsdeletetest-7688" for this suite.
Jan 26 21:58:25.115: INFO: Namespace nsdeletetest-7688 was already deleted
STEP: Destroying namespace "nsdeletetest-2674" for this suite.

• [SLOW TEST:33.477 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":143,"skipped":2334,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:58:25.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 26 21:58:41.437: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:41.444: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 21:58:43.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:43.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 21:58:45.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:45.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 21:58:47.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:47.453: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 21:58:49.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:49.452: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 21:58:51.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:51.463: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 21:58:53.445: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 21:58:53.452: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:58:53.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6498" for this suite.

• [SLOW TEST:28.354 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2334,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:58:53.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-fb3a9bed-8e19-48c0-ba0f-101fb2bf600f
STEP: Creating a pod to test consume secrets
Jan 26 21:58:53.647: INFO: Waiting up to 5m0s for pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401" in namespace "secrets-3331" to be "success or failure"
Jan 26 21:58:53.659: INFO: Pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401": Phase="Pending", Reason="", readiness=false. Elapsed: 11.846378ms
Jan 26 21:58:55.667: INFO: Pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019237044s
Jan 26 21:58:57.674: INFO: Pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027140668s
Jan 26 21:58:59.680: INFO: Pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032811651s
Jan 26 21:59:01.789: INFO: Pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141651618s
STEP: Saw pod success
Jan 26 21:59:01.789: INFO: Pod "pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401" satisfied condition "success or failure"
Jan 26 21:59:01.797: INFO: Trying to get logs from node jerma-node pod pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401 container secret-volume-test: 
STEP: delete the pod
Jan 26 21:59:01.876: INFO: Waiting for pod pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401 to disappear
Jan 26 21:59:01.885: INFO: Pod pod-secrets-80bc4971-9620-4d8a-9197-3feaaaf40401 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:59:01.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3331" for this suite.

• [SLOW TEST:8.459 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2352,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:59:01.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi-version CRD
Jan 26 21:59:02.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:59:20.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2871" for this suite.

• [SLOW TEST:18.876 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":146,"skipped":2370,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:59:20.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 21:59:20.888: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 26 21:59:25.979: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jan 26 21:59:29.992: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 26 21:59:32.005: INFO: Creating deployment "test-rollover-deployment"
Jan 26 21:59:32.029: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 26 21:59:34.042: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 26 21:59:34.051: INFO: Ensure that both replica sets have 1 created replica
Jan 26 21:59:34.058: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 26 21:59:34.067: INFO: Updating deployment test-rollover-deployment
Jan 26 21:59:34.067: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 26 21:59:36.086: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 26 21:59:36.098: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 26 21:59:36.109: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:36.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:38.121: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:38.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:40.123: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:40.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672774, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:42.123: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:42.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672781, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:44.121: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:44.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672781, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:46.127: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:46.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672781, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:48.143: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:48.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672781, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:50.120: INFO: all replica sets need to contain the pod-template-hash label
Jan 26 21:59:50.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672781, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 21:59:52.122: INFO: 
Jan 26 21:59:52.122: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 26 21:59:52.135: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-1932 /apis/apps/v1/namespaces/deployment-1932/deployments/test-rollover-deployment 23cd9814-68df-4782-ba6b-b6ce3322fd47 4547206 2 2020-01-26 21:59:32 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c6bc98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-26 21:59:32 +0000 UTC,LastTransitionTime:2020-01-26 21:59:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-26 21:59:51 +0000 UTC,LastTransitionTime:2020-01-26 21:59:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 26 21:59:52.140: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-1932 /apis/apps/v1/namespaces/deployment-1932/replicasets/test-rollover-deployment-574d6dfbff c04ea59d-70eb-410e-9aad-b691044c1b0b 4547194 2 2020-01-26 21:59:34 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 23cd9814-68df-4782-ba6b-b6ce3322fd47 0xc002d4bec7 0xc002d4bec8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d4bf38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 26 21:59:52.140: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 26 21:59:52.141: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-1932 /apis/apps/v1/namespaces/deployment-1932/replicasets/test-rollover-controller d2c835f9-1d2b-4d07-84cd-d082f87436bb 4547203 2 2020-01-26 21:59:20 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 23cd9814-68df-4782-ba6b-b6ce3322fd47 0xc002d4bdf7 0xc002d4bdf8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d4be58  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 26 21:59:52.141: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-1932 /apis/apps/v1/namespaces/deployment-1932/replicasets/test-rollover-deployment-f6c94f66c 77f6bc48-9034-4d1b-a434-55dc12208e23 4547143 2 2020-01-26 21:59:32 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 23cd9814-68df-4782-ba6b-b6ce3322fd47 0xc002d4bfa0 0xc002d4bfa1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ba6018  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 26 21:59:52.145: INFO: Pod "test-rollover-deployment-574d6dfbff-7jc6r" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-7jc6r test-rollover-deployment-574d6dfbff- deployment-1932 /api/v1/namespaces/deployment-1932/pods/test-rollover-deployment-574d6dfbff-7jc6r 3b2e52a7-0a5e-4bf6-ab69-c22926c5e533 4547168 0 2020-01-26 21:59:34 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff c04ea59d-70eb-410e-9aad-b691044c1b0b 0xc002ba6647 0xc002ba6648}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2mz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2mz2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2mz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 21:59:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 21:59:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 21:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 21:59:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-26 21:59:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 21:59:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://3b583dac8ab8992823a49d0e8804f60f6972fa8f2ec38139f568a8dc0224a998,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 21:59:52.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1932" for this suite.

• [SLOW TEST:31.350 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":147,"skipped":2386,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 21:59:52.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-4414594a-a810-4f1e-ae9c-cd4c336eb0ff
STEP: Creating a pod to test consume configMaps
Jan 26 21:59:52.457: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b" in namespace "projected-9577" to be "success or failure"
Jan 26 21:59:52.482: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.383928ms
Jan 26 21:59:54.519: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06200736s
Jan 26 21:59:56.531: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074355309s
Jan 26 21:59:58.540: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083498761s
Jan 26 22:00:00.552: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095478045s
Jan 26 22:00:02.567: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110507394s
Jan 26 22:00:04.579: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.122343276s
STEP: Saw pod success
Jan 26 22:00:04.579: INFO: Pod "pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b" satisfied condition "success or failure"
Jan 26 22:00:04.584: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 22:00:04.829: INFO: Waiting for pod pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b to disappear
Jan 26 22:00:04.836: INFO: Pod pod-projected-configmaps-56c107a1-ede3-4d42-85a3-cbbb6ce9339b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:00:04.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9577" for this suite.

• [SLOW TEST:12.695 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2392,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:00:04.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1789.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1789.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1789.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1789.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1789.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1789.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 22:00:15.138: INFO: DNS probes using dns-1789/dns-test-bc939017-bd03-4160-827f-7493d183804c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:00:15.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1789" for this suite.

• [SLOW TEST:10.376 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":149,"skipped":2403,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:00:15.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:00:15.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3" in namespace "projected-5084" to be "success or failure"
Jan 26 22:00:15.387: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.677497ms
Jan 26 22:00:17.392: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012428411s
Jan 26 22:00:19.403: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023475586s
Jan 26 22:00:21.410: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030857072s
Jan 26 22:00:23.421: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042083015s
Jan 26 22:00:25.428: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048855114s
STEP: Saw pod success
Jan 26 22:00:25.428: INFO: Pod "downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3" satisfied condition "success or failure"
Jan 26 22:00:25.432: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3 container client-container: 
STEP: delete the pod
Jan 26 22:00:25.460: INFO: Waiting for pod downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3 to disappear
Jan 26 22:00:25.604: INFO: Pod downwardapi-volume-dff74bf4-3689-4614-8f09-0930b54c14f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:00:25.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5084" for this suite.

• [SLOW TEST:10.383 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2413,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:00:25.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jan 26 22:00:25.730: INFO: Waiting up to 5m0s for pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a" in namespace "containers-3022" to be "success or failure"
Jan 26 22:00:25.745: INFO: Pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.51582ms
Jan 26 22:00:27.752: INFO: Pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022203691s
Jan 26 22:00:29.761: INFO: Pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031108155s
Jan 26 22:00:32.325: INFO: Pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594841763s
Jan 26 22:00:34.333: INFO: Pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.60262841s
STEP: Saw pod success
Jan 26 22:00:34.333: INFO: Pod "client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a" satisfied condition "success or failure"
Jan 26 22:00:34.336: INFO: Trying to get logs from node jerma-node pod client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a container test-container: 
STEP: delete the pod
Jan 26 22:00:34.374: INFO: Waiting for pod client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a to disappear
Jan 26 22:00:34.402: INFO: Pod client-containers-5ee3f24d-1adf-4816-83ff-182d8917587a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:00:34.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3022" for this suite.

• [SLOW TEST:8.790 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2429,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:00:34.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 26 22:00:41.235: INFO: Successfully updated pod "pod-update-63607cb3-bb20-4b4b-91c4-aaf662ae8851"
STEP: verifying the updated pod is in kubernetes
Jan 26 22:00:41.312: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:00:41.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1701" for this suite.

• [SLOW TEST:6.915 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2431,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:00:41.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:00:41.395: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 26 22:00:41.482: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 26 22:00:46.509: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 22:00:50.527: INFO: Creating deployment "test-rolling-update-deployment"
Jan 26 22:00:50.537: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 26 22:00:50.598: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 26 22:00:52.612: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 26 22:00:52.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:00:54.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:00:56.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672850, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:00:58.625: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 26 22:00:58.640: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-149 /apis/apps/v1/namespaces/deployment-149/deployments/test-rolling-update-deployment 3d3e176a-b37a-459a-885d-d2f3b10931fe 4547568 1 2020-01-26 22:00:50 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002985938  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-26 22:00:50 +0000 UTC,LastTransitionTime:2020-01-26 22:00:50 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-26 22:00:56 +0000 UTC,LastTransitionTime:2020-01-26 22:00:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 26 22:00:58.646: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-149 /apis/apps/v1/namespaces/deployment-149/replicasets/test-rolling-update-deployment-67cf4f6444 b3bd8c7c-8363-4ee8-98e7-f62aa055fe5e 4547557 1 2020-01-26 22:00:50 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 3d3e176a-b37a-459a-885d-d2f3b10931fe 0xc002985e57 0xc002985e58}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002985ed8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:00:58.647: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 26 22:00:58.647: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-149 /apis/apps/v1/namespaces/deployment-149/replicasets/test-rolling-update-controller 2e5dd69e-77b0-40fa-a064-0dd9badf6962 4547566 2 2020-01-26 22:00:41 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 3d3e176a-b37a-459a-885d-d2f3b10931fe 0xc002985cef 0xc002985d00}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002985dd8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:00:58.653: INFO: Pod "test-rolling-update-deployment-67cf4f6444-94snp" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-94snp test-rolling-update-deployment-67cf4f6444- deployment-149 /api/v1/namespaces/deployment-149/pods/test-rolling-update-deployment-67cf4f6444-94snp d5b9654d-b025-46cc-942d-8ade010cc588 4547556 0 2020-01-26 22:00:50 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 b3bd8c7c-8363-4ee8-98e7-f62aa055fe5e 0xc0028cc377 0xc0028cc378}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7n9zr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7n9zr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7n9zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:00:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:00:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:00:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:00:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-26 22:00:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:00:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e94d28caec75e7de9cd06cd4e242494b1c62f709e8cd6e5a0d3581b0a16f760c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:00:58.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-149" for this suite.

• [SLOW TEST:17.337 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":153,"skipped":2439,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:00:58.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:01:05.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5831" for this suite.
STEP: Destroying namespace "nsdeletetest-8736" for this suite.
Jan 26 22:01:05.423: INFO: Namespace nsdeletetest-8736 was already deleted
STEP: Destroying namespace "nsdeletetest-7938" for this suite.

• [SLOW TEST:6.792 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":154,"skipped":2458,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:01:05.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:01:06.829: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:01:08.846: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:01:10.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:01:12.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715672866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:01:16.038: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:01:16.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2511-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:01:17.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5535" for this suite.
STEP: Destroying namespace "webhook-5535-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.349 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":155,"skipped":2462,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:01:17.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jan 26 22:01:26.096: INFO: Pod pod-hostip-b1db3d7f-7318-4b50-93d5-05f417d248bb has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:01:26.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4550" for this suite.

• [SLOW TEST:8.298 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:01:26.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan 26 22:01:26.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1341'
Jan 26 22:01:26.871: INFO: stderr: ""
Jan 26 22:01:26.871: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 26 22:01:27.881: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:27.882: INFO: Found 0 / 1
Jan 26 22:01:28.880: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:28.880: INFO: Found 0 / 1
Jan 26 22:01:29.878: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:29.878: INFO: Found 0 / 1
Jan 26 22:01:30.886: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:30.886: INFO: Found 0 / 1
Jan 26 22:01:31.882: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:31.883: INFO: Found 0 / 1
Jan 26 22:01:32.879: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:32.879: INFO: Found 0 / 1
Jan 26 22:01:34.389: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:34.389: INFO: Found 0 / 1
Jan 26 22:01:34.883: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:34.883: INFO: Found 0 / 1
Jan 26 22:01:35.884: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:35.884: INFO: Found 1 / 1
Jan 26 22:01:35.884: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 26 22:01:35.890: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:35.890: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 22:01:35.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-r58wg --namespace=kubectl-1341 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 26 22:01:36.087: INFO: stderr: ""
Jan 26 22:01:36.087: INFO: stdout: "pod/agnhost-master-r58wg patched\n"
STEP: checking annotations
Jan 26 22:01:36.134: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:01:36.134: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:01:36.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1341" for this suite.

• [SLOW TEST:10.035 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":157,"skipped":2498,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:01:36.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:01:52.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6336" for this suite.

• [SLOW TEST:16.254 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":158,"skipped":2500,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:01:52.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-ngdf
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 22:01:52.547: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ngdf" in namespace "subpath-5611" to be "success or failure"
Jan 26 22:01:52.564: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.859082ms
Jan 26 22:01:54.576: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028880793s
Jan 26 22:01:56.589: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041357855s
Jan 26 22:01:58.600: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053034447s
Jan 26 22:02:00.617: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 8.070316299s
Jan 26 22:02:02.624: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 10.077296781s
Jan 26 22:02:04.632: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 12.084888462s
Jan 26 22:02:06.640: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 14.092795873s
Jan 26 22:02:08.649: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 16.101683166s
Jan 26 22:02:10.655: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 18.108036019s
Jan 26 22:02:12.665: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 20.118012959s
Jan 26 22:02:14.676: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 22.128998443s
Jan 26 22:02:16.691: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 24.143405846s
Jan 26 22:02:18.699: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Running", Reason="", readiness=true. Elapsed: 26.151926183s
Jan 26 22:02:20.707: INFO: Pod "pod-subpath-test-configmap-ngdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.159785708s
STEP: Saw pod success
Jan 26 22:02:20.707: INFO: Pod "pod-subpath-test-configmap-ngdf" satisfied condition "success or failure"
Jan 26 22:02:20.713: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-ngdf container test-container-subpath-configmap-ngdf: 
STEP: delete the pod
Jan 26 22:02:20.750: INFO: Waiting for pod pod-subpath-test-configmap-ngdf to disappear
Jan 26 22:02:20.753: INFO: Pod pod-subpath-test-configmap-ngdf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ngdf
Jan 26 22:02:20.753: INFO: Deleting pod "pod-subpath-test-configmap-ngdf" in namespace "subpath-5611"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:02:20.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5611" for this suite.

• [SLOW TEST:28.363 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":159,"skipped":2503,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:02:20.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:02:20.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7912" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":160,"skipped":2570,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:02:20.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 26 22:02:20.983: INFO: Waiting up to 5m0s for pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269" in namespace "emptydir-6900" to be "success or failure"
Jan 26 22:02:21.028: INFO: Pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269": Phase="Pending", Reason="", readiness=false. Elapsed: 44.712485ms
Jan 26 22:02:23.040: INFO: Pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056785279s
Jan 26 22:02:25.046: INFO: Pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062729627s
Jan 26 22:02:27.056: INFO: Pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073432914s
Jan 26 22:02:29.063: INFO: Pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079889083s
STEP: Saw pod success
Jan 26 22:02:29.063: INFO: Pod "pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269" satisfied condition "success or failure"
Jan 26 22:02:29.066: INFO: Trying to get logs from node jerma-node pod pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269 container test-container: 
STEP: delete the pod
Jan 26 22:02:29.098: INFO: Waiting for pod pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269 to disappear
Jan 26 22:02:29.103: INFO: Pod pod-f95f93b5-5ccd-4f29-b3ef-e443e0f6a269 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:02:29.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6900" for this suite.

• [SLOW TEST:8.237 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2588,"failed":0}
SSSSSSSSSSSSSS
------------------------------
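The "(root,0644,tmpfs)" triple in the spec name encodes who writes the file (root), the file mode asserted (0644), and the emptyDir medium (tmpfs, i.e. Medium set to Memory). A sketch of a pod with that shape, using the core/v1 types; the image and command are illustrative, not the suite's exact test image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func tmpfsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what turns the emptyDir into a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // placeholder for the suite's test image
				Command:      []string{"sh", "-c", "echo hi > /test/f && chmod 0644 /test/f && ls -l /test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
}

func main() { _ = tmpfsPod() }
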
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:02:29.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 26 22:02:37.249: INFO: &Pod{ObjectMeta:{send-events-79314c8e-32f2-4577-92b5-c4fcd63314ed  events-8854 /api/v1/namespaces/events-8854/pods/send-events-79314c8e-32f2-4577-92b5-c4fcd63314ed 41383edb-c092-4e73-a781-023382c8337e 4548082 0 2020-01-26 22:02:29 +0000 UTC   map[name:foo time:211589290] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zqtpg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zqtpg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zqtpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:02:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:02:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:02:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:02:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-26 22:02:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:02:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://23733560e001f0258a39431334d12c2fefb8055ae8343a625833075a2dc5d213,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 26 22:02:39.258: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 26 22:02:41.268: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:02:41.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8854" for this suite.

• [SLOW TEST:12.187 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":162,"skipped":2602,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
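The two "checking for ... event" steps above poll the Events API with a field selector scoped to the pod, once for the scheduler as the event source and once for the kubelet. A sketch of the kubelet-side query; the pod and namespace names are copied from the log, and the client-go v0.17 signatures (no context argument) match the suite's version:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Events whose involved object is the test pod and whose source is the
	// kubelet; the scheduler-side check is the same idea with a scheduler source.
	sel := fields.Set{
		"involvedObject.kind": "Pod",
		"involvedObject.name": "send-events-79314c8e-32f2-4577-92b5-c4fcd63314ed",
		"source":              "kubelet",
	}.AsSelector().String()

	events, err := client.CoreV1().Events("events-8854").List(metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
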
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:02:41.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-b7449c53-fcf7-4642-9a36-b16f30623292
STEP: Creating a pod to test consume configMaps
Jan 26 22:02:41.484: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140" in namespace "projected-7657" to be "success or failure"
Jan 26 22:02:41.528: INFO: Pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140": Phase="Pending", Reason="", readiness=false. Elapsed: 43.511306ms
Jan 26 22:02:43.537: INFO: Pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053243487s
Jan 26 22:02:45.544: INFO: Pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060098877s
Jan 26 22:02:47.552: INFO: Pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06781613s
Jan 26 22:02:49.561: INFO: Pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077023556s
STEP: Saw pod success
Jan 26 22:02:49.561: INFO: Pod "pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140" satisfied condition "success or failure"
Jan 26 22:02:49.565: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 22:02:49.656: INFO: Waiting for pod pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140 to disappear
Jan 26 22:02:49.662: INFO: Pod pod-projected-configmaps-cca44a29-ca7d-41e1-85ac-6653a1132140 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:02:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7657" for this suite.

• [SLOW TEST:8.367 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2625,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
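"Mappings and Item mode set" means the ConfigMap key is remapped to a different path inside the projected volume and carries its own per-item file mode instead of the volume's defaultMode. A sketch of that volume shape in core/v1 terms; the ConfigMap name, key, path, and the 0400 mode are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

var projectedVolume = corev1.Volume{
	Name: "projected-configmap-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
					Items: []corev1.KeyToPath{{
						Key:  "data-1",         // key in the ConfigMap
						Path: "path/to/data-2", // remapped file path inside the volume
						Mode: int32Ptr(0400),   // per-item mode overriding defaultMode
					}},
				},
			}},
		},
	},
}

func main() { _ = projectedVolume }
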
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:02:49.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 26 22:02:49.745: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 22:02:49.840: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 22:02:49.843: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 26 22:02:49.855: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.855: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:02:49.855: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 26 22:02:49.855: INFO: 	Container weave ready: true, restart count 1
Jan 26 22:02:49.855: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:02:49.855: INFO: send-events-79314c8e-32f2-4577-92b5-c4fcd63314ed from events-8854 started at 2020-01-26 22:02:29 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.855: INFO: 	Container p ready: true, restart count 0
Jan 26 22:02:49.855: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 26 22:02:49.899: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 26 22:02:49.899: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container etcd ready: true, restart count 1
Jan 26 22:02:49.899: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:02:49.899: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:02:49.899: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 26 22:02:49.899: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:02:49.899: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 26 22:02:49.899: INFO: 	Container weave ready: true, restart count 0
Jan 26 22:02:49.899: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:02:49.899: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 26 22:02:49.899: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-780abb46-d97e-4e56-9fdc-c2adc85963d0 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, and expect it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, and expect it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-780abb46-d97e-4e56-9fdc-c2adc85963d0 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-780abb46-d97e-4e56-9fdc-c2adc85963d0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:03:20.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5573" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:30.728 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":164,"skipped":2665,"failed":0}
SSSSSSS
------------------------------
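The predicate exercised above treats a host port binding as the full tuple (hostIP, hostPort, protocol), so 54321/TCP on 127.0.0.1, 54321/TCP on 127.0.0.2, and 54321/UDP on 127.0.0.2 can all be scheduled onto one node without conflicting. A sketch of the pod factory such a test needs; the agnhost image appears elsewhere in this log, while the node-pinning label is an illustrative stand-in for the random label above:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds one of the three pods in the predicate check: the same
// HostPort (54321) but a distinct (hostIP, protocol) pair each time.
func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Pin all three pods to the same labeled node (label is illustrative).
			NodeSelector: map[string]string{"kubernetes.io/e2e-test": "target"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	_ = hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP)
	_ = hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP)
	_ = hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP)
}
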
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:03:20.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on the node's default medium
Jan 26 22:03:20.575: INFO: Waiting up to 5m0s for pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e" in namespace "emptydir-9256" to be "success or failure"
Jan 26 22:03:20.604: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.927966ms
Jan 26 22:03:22.613: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037771531s
Jan 26 22:03:24.622: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046090134s
Jan 26 22:03:26.668: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092795567s
Jan 26 22:03:28.676: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100339211s
Jan 26 22:03:30.706: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13023348s
STEP: Saw pod success
Jan 26 22:03:30.706: INFO: Pod "pod-4e31ac4d-a91c-4e9f-81a4-38824327834e" satisfied condition "success or failure"
Jan 26 22:03:30.710: INFO: Trying to get logs from node jerma-node pod pod-4e31ac4d-a91c-4e9f-81a4-38824327834e container test-container: 
STEP: delete the pod
Jan 26 22:03:30.735: INFO: Waiting for pod pod-4e31ac4d-a91c-4e9f-81a4-38824327834e to disappear
Jan 26 22:03:30.738: INFO: Pod pod-4e31ac4d-a91c-4e9f-81a4-38824327834e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:03:30.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9256" for this suite.

• [SLOW TEST:10.345 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2672,"failed":0}
SS
------------------------------
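Relative to the "(root,0644,tmpfs)" case earlier, this "(non-root,0644,default)" variant changes two knobs: the emptyDir stays on the node's default medium (an empty Medium field), and the writing container runs under a non-root UID via its security context. A sketch; UID 1001 and the image are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

var nonRootPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644-default"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			// No Medium set: the emptyDir lives on the node's default storage.
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container",
			Image: "busybox", // placeholder image
			SecurityContext: &corev1.SecurityContext{
				RunAsUser: int64Ptr(1001), // the non-root writer
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
		}},
	},
}

func main() { _ = nonRootPod }
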
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:03:30.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 26 22:03:39.509: INFO: Successfully updated pod "annotationupdate180839b4-ddaa-4386-9c0b-bc506792b03c"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:03:41.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8726" for this suite.

• [SLOW TEST:10.884 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2674,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
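The spec above mounts the pod's own annotations through a downwardAPI volume, patches the annotations, and then waits for the kubelet to rewrite the projected file, which is the update that "Successfully updated pod" records. The volume shape, in core/v1 terms:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

var annotationsVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				// The kubelet keeps this file in sync with the live annotations.
				Path:     "annotations",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
			}},
		},
	},
}

func main() { _ = annotationsVolume }
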
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:03:41.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:03:41.826: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3a6be5e4-66fe-4e0b-9905-7776424db553", Controller:(*bool)(0xc002b4e272), BlockOwnerDeletion:(*bool)(0xc002b4e273)}}
Jan 26 22:03:41.835: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ba5be995-50cc-4b81-9fd1-2067d8045fa0", Controller:(*bool)(0xc002b7327a), BlockOwnerDeletion:(*bool)(0xc002b7327b)}}
Jan 26 22:03:41.904: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c7e47eba-d66e-4fb3-a90e-6fd2499221aa", Controller:(*bool)(0xc002b4e41a), BlockOwnerDeletion:(*bool)(0xc002b4e41b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:03:46.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7273" for this suite.

• [SLOW TEST:5.369 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":167,"skipped":2694,"failed":0}
SSSSSSSSSSSS
------------------------------
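The three OwnerReferences printed above form a deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The assertion is that the garbage collector still deletes all three rather than waiting forever on BlockOwnerDeletion. A sketch of how such a reference is built; the UIDs are copied from the log:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

// ownedBy returns the OwnerReferences slice placed on each pod in the circle.
func ownedBy(ownerName string, ownerUID types.UID) []metav1.OwnerReference {
	return []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               ownerName,
		UID:                ownerUID,
		Controller:         boolPtr(true),
		BlockOwnerDeletion: boolPtr(true),
	}}
}

func main() {
	_ = ownedBy("pod3", types.UID("3a6be5e4-66fe-4e0b-9905-7776424db553")) // goes on pod1
	_ = ownedBy("pod1", types.UID("ba5be995-50cc-4b81-9fd1-2067d8045fa0")) // goes on pod2
	_ = ownedBy("pod2", types.UID("c7e47eba-d66e-4fb3-a90e-6fd2499221aa")) // goes on pod3
}
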
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:03:47.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:03:47.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 26 22:03:50.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2498 create -f -'
Jan 26 22:03:53.238: INFO: stderr: ""
Jan 26 22:03:53.238: INFO: stdout: "e2e-test-crd-publish-openapi-3302-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 26 22:03:53.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2498 delete e2e-test-crd-publish-openapi-3302-crds test-cr'
Jan 26 22:03:53.358: INFO: stderr: ""
Jan 26 22:03:53.359: INFO: stdout: "e2e-test-crd-publish-openapi-3302-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 26 22:03:53.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2498 apply -f -'
Jan 26 22:03:53.679: INFO: stderr: ""
Jan 26 22:03:53.679: INFO: stdout: "e2e-test-crd-publish-openapi-3302-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 26 22:03:53.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2498 delete e2e-test-crd-publish-openapi-3302-crds test-cr'
Jan 26 22:03:53.807: INFO: stderr: ""
Jan 26 22:03:53.807: INFO: stdout: "e2e-test-crd-publish-openapi-3302-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 26 22:03:53.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3302-crds'
Jan 26 22:03:54.146: INFO: stderr: ""
Jan 26 22:03:54.146: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3302-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:03:57.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2498" for this suite.

• [SLOW TEST:10.985 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":168,"skipped":2706,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:03:57.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:03:58.089: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:03:58.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9077" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":169,"skipped":2706,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:03:58.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 26 22:03:59.090: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 22:03:59.165: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 22:03:59.168: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 26 22:03:59.175: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 26 22:03:59.175: INFO: 	Container weave ready: true, restart count 1
Jan 26 22:03:59.175: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:03:59.175: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.175: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:03:59.175: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 26 22:03:59.184: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.184: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 26 22:03:59.184: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.184: INFO: 	Container etcd ready: true, restart count 1
Jan 26 22:03:59.184: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.184: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:03:59.184: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.184: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:03:59.185: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.185: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 26 22:03:59.185: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.185: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:03:59.185: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 26 22:03:59.185: INFO: 	Container weave ready: true, restart count 0
Jan 26 22:03:59.185: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:03:59.185: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 26 22:03:59.185: INFO: 	Container kube-scheduler ready: true, restart count 4
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule a Pod with a nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ed8ef7cbaafd62], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:04:00.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-590" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":170,"skipped":2787,"failed":0}
SS
------------------------------
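A sketch of the kind of pod that produces the FailedScheduling event quoted above: its nodeSelector names a label no node carries, so on this two-node cluster the scheduler can only report "0/2 nodes are available". The label pair and image are illustrative stand-ins for the suite's randomized values:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var restrictedPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		// No node has this label, so the pod can never be scheduled.
		NodeSelector: map[string]string{"e2e.example/nonexistent": "true"},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.1", // placeholder image
		}},
	},
}

func main() { _ = restrictedPod }
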
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:04:00.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:04:26.424: INFO: Container started at 2020-01-26 22:04:05 +0000 UTC, pod became ready at 2020-01-26 22:04:25 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:04:26.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8121" for this suite.

• [SLOW TEST:26.209 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2789,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
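The timestamps above carry the assertion: the container started at 22:04:05 but Ready only flipped at 22:04:25, i.e. not before the probe's initial delay had elapsed, and the pod never restarted. A sketch of a readiness probe with that shape; the delay, path, and port are assumptions for illustration:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var readiness = &corev1.Probe{
	// In core/v1 at v1.17 the probe action sits on the embedded Handler
	// field; later API versions rename it ProbeHandler.
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
	},
	InitialDelaySeconds: 20, // Ready must not flip before this elapses
	PeriodSeconds:       5,
	FailureThreshold:    3,
}

func main() { _ = readiness }
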
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:04:26.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:04:26.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10" in namespace "downward-api-4895" to be "success or failure"
Jan 26 22:04:26.572: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10": Phase="Pending", Reason="", readiness=false. Elapsed: 34.833482ms
Jan 26 22:04:28.586: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049120398s
Jan 26 22:04:30.608: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071483854s
Jan 26 22:04:32.622: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085559517s
Jan 26 22:04:34.630: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093562602s
Jan 26 22:04:36.640: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102786318s
STEP: Saw pod success
Jan 26 22:04:36.640: INFO: Pod "downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10" satisfied condition "success or failure"
Jan 26 22:04:36.643: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10 container client-container: 
STEP: delete the pod
Jan 26 22:04:36.874: INFO: Waiting for pod downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10 to disappear
Jan 26 22:04:36.883: INFO: Pod downwardapi-volume-b8155924-76da-4a61-a6a5-6ed13291fd10 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:04:36.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4895" for this suite.

• [SLOW TEST:10.462 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2849,"failed":0}
SSSSSSSS
------------------------------
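Here the downwardAPI volume projects the container's own requests.cpu into a file, and the divisor fixes the unit the value is reported in. A sketch of the volume item; the container name and the "1m" (millicores) divisor are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

var cpuRequestFile = corev1.DownwardAPIVolumeFile{
	Path: "cpu_request",
	ResourceFieldRef: &corev1.ResourceFieldSelector{
		ContainerName: "client-container",
		Resource:      "requests.cpu",
		Divisor:       resource.MustParse("1m"), // report the request in millicores
	},
}

func main() { _ = cpuRequestFile }
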
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:04:36.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:04:37.046: INFO: Creating deployment "webserver-deployment"
Jan 26 22:04:37.057: INFO: Waiting for observed generation 1
Jan 26 22:04:39.397: INFO: Waiting for all required pods to come up
Jan 26 22:04:40.306: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 26 22:05:04.891: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 26 22:05:04.901: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 26 22:05:04.912: INFO: Updating deployment webserver-deployment
Jan 26 22:05:04.912: INFO: Waiting for observed generation 2
Jan 26 22:05:07.954: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 26 22:05:08.009: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 26 22:05:08.225: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 26 22:05:08.808: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 26 22:05:08.808: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 26 22:05:08.811: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 26 22:05:08.815: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 26 22:05:08.815: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 26 22:05:08.827: INFO: Updating deployment webserver-deployment
Jan 26 22:05:08.827: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 26 22:05:09.598: INFO: Verifying that the first rollout's replicaset has .spec.replicas = 20
Jan 26 22:05:10.175: INFO: Verifying that the second rollout's replicaset has .spec.replicas = 13
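
The jump from (8, 5) to (20, 13) is the proportional scaling being verified: scaling the deployment to 30 mid-rollout gives the controller 30 plus maxSurge(3) = 33 slots, and the 20 extra replicas are handed out roughly in proportion to the two replicasets' current sizes, about 20*8/13 = 12 to the old one and the remaining 8 to the new one. A simplified arithmetic sketch; the real deployment controller rounds each share and distributes leftovers in a defined order:

package main

import "fmt"

func main() {
	oldRS, newRS := int32(8), int32(5) // .spec.replicas of each replicaset at scale time
	total := oldRS + newRS             // 13 replicas currently requested
	target := int32(30) + 3            // new .spec.replicas + maxSurge = 33
	extra := target - total            // 20 replicas to distribute

	oldShare := extra * oldRS / total // 20*8/13 = 12 (integer floor)
	newShare := extra - oldShare      // the remaining 8

	fmt.Println(oldRS+oldShare, newRS+newShare) // 20 13, matching the log
}
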
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 26 22:05:15.234: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5737 /apis/apps/v1/namespaces/deployment-5737/deployments/webserver-deployment 0cb7f45b-e7c8-417b-be6e-a691d898af77 4548969 3 2020-01-26 22:04:37 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0041f6d58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-26 22:05:07 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-26 22:05:09 +0000 UTC,LastTransitionTime:2020-01-26 22:05:09 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 26 22:05:16.089: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5737 /apis/apps/v1/namespaces/deployment-5737/replicasets/webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 4548970 3 2020-01-26 22:05:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 0cb7f45b-e7c8-417b-be6e-a691d898af77 0xc00419c147 0xc00419c148}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00419c1c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:05:16.089: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 26 22:05:16.089: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5737 /apis/apps/v1/namespaces/deployment-5737/replicasets/webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 4548960 3 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 0cb7f45b-e7c8-417b-be6e-a691d898af77 0xc00419c057 0xc00419c058}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00419c0c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:05:17.507: INFO: Pod "webserver-deployment-595b5b9587-4dtgh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4dtgh webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-4dtgh 15e7054f-1ba6-4f84-9958-a656e6a099d6 4548912 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419c787 0xc00419c788}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.508: INFO: Pod "webserver-deployment-595b5b9587-7n964" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7n964 webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-7n964 df116d35-e37f-423a-a49b-b11ff2996d8d 4548827 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419c8d7 0xc00419c8d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-26 22:04:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:05:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ba92c7f945ce1bf7e2b9342f3f01367109b1d0b8ed1a5db867e63000784baba6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.508: INFO: Pod "webserver-deployment-595b5b9587-7sth9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7sth9 webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-7sth9 79759b98-7a5b-4e55-9844-498c2a90ae84 4548782 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419ca70 0xc00419ca71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-26 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:04:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://47094da78abbfc754d173e011528ded268d08d500342b58319302f742cd86849,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
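The "is available" / "is not available" verdicts attached to these dumps follow the deployment controller's availability rule: a pod counts as available once its Ready condition is True and has stayed True for at least the deployment's minReadySeconds. A minimal Go sketch of that rule, assuming the stock k8s.io/api types (the helper name isPodAvailable is illustrative; the upstream equivalent lives in k8s.io/kubernetes/pkg/api/v1/pod):

package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether a pod counts toward availableReplicas:
// it must be Ready, and must have been Ready for at least minReadySeconds
// (a value of 0 means "available as soon as Ready").
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != v1.PodReady {
			continue
		}
		if c.Status != v1.ConditionTrue {
			return false
		}
		if minReadySeconds == 0 {
			return true
		}
		minReady := time.Duration(minReadySeconds) * time.Second
		return !c.LastTransitionTime.IsZero() &&
			c.LastTransitionTime.Add(minReady).Before(now.Time)
	}
	return false // no Ready condition recorded yet, e.g. the pod is still Pending
}

Pod 7sth9 above passes this check: its Phase is Running and its Ready condition went True at 22:05:01.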
Jan 26 22:05:17.509: INFO: Pod "webserver-deployment-595b5b9587-8hckp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8hckp webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-8hckp d3b90358-b032-4427-8c2a-31b6bf7d8370 4548938 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419cc20 0xc00419cc21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
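Contrast 8hckp with 7sth9: its Status carries only a PodScheduled condition, meaning the scheduler has bound it to jerma-node but the kubelet has not yet reported Initialized or Ready, so Phase stays Pending and HostIP/PodIP are still empty. Extending the sketch above (same imports; the helper name is again mine), that state can be recognized as:

// podScheduledButNotStarted reports whether the scheduler has placed the
// pod but the kubelet has not yet made progress: Phase is still Pending,
// PodScheduled is True, and no other condition has gone True.
func podScheduledButNotStarted(pod *v1.Pod) bool {
	if pod.Status.Phase != v1.PodPending {
		return false
	}
	scheduled := false
	for _, c := range pod.Status.Conditions {
		switch c.Type {
		case v1.PodScheduled:
			scheduled = c.Status == v1.ConditionTrue
		case v1.PodInitialized, v1.PodReady, v1.ContainersReady:
			if c.Status == v1.ConditionTrue {
				return false // kubelet has already started the pod
			}
		}
	}
	return scheduled
}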
Jan 26 22:05:17.509: INFO: Pod "webserver-deployment-595b5b9587-c8dbk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c8dbk webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-c8dbk 3b600d90-75c3-4ced-b6a9-95445ab27973 4548924 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419cd47 0xc00419cd48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.509: INFO: Pod "webserver-deployment-595b5b9587-cc7tt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cc7tt webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-cc7tt e5560880-44c2-42f5-98e7-08d500d6d175 4548923 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419ce97 0xc00419ce98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.510: INFO: Pod "webserver-deployment-595b5b9587-dlqzc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dlqzc webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-dlqzc 4203a548-a619-40f5-959d-a383c52281a2 4548937 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d007 0xc00419d008}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
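Note also that every pod in these dumps carries the same two NoExecute tolerations with TolerationSeconds:*300 even though the test never sets any: the DefaultTolerationSeconds admission plugin injects them at creation time, so a pod is evicted roughly five minutes after its node goes not-ready or unreachable. Built by hand with the stock types they would look like this (a sketch; the 300 mirrors the plugin's default):

// defaultTolerations reproduces the two tolerations visible in every
// pod dump above, as injected by the DefaultTolerationSeconds plugin.
func defaultTolerations() []v1.Toleration {
	seconds := int64(300) // the *300 printed in the dumps
	return []v1.Toleration{
		{
			Key:               "node.kubernetes.io/not-ready",
			Operator:          v1.TolerationOpExists,
			Effect:            v1.TaintEffectNoExecute,
			TolerationSeconds: &seconds,
		},
		{
			Key:               "node.kubernetes.io/unreachable",
			Operator:          v1.TolerationOpExists,
			Effect:            v1.TaintEffectNoExecute,
			TolerationSeconds: &seconds,
		},
	}
}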
Jan 26 22:05:17.510: INFO: Pod "webserver-deployment-595b5b9587-fr2dz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fr2dz webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-fr2dz a9bbdf47-0e61-4b2b-acf2-540a02b9aa98 4548803 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d127 0xc00419d128}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-26 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:05:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b4b8eb1a1e8c66d2d5698c19ca736bbfc0b0245182b4c71ecd3db94fb4cf4bfd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.511: INFO: Pod "webserver-deployment-595b5b9587-g2vx4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-g2vx4 webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-g2vx4 909a5470-d212-451a-93d8-78c88a37842a 4548818 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d2a0 0xc00419d2a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-26 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:05:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5027e85d8ba271084dfe0be1505e19c355025e759ad3fe02d295d674aa7afb60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.512: INFO: Pod "webserver-deployment-595b5b9587-hqwjp" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hqwjp webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-hqwjp 02112f8a-a861-4d12-89ca-268f13a1c16f 4548789 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d440 0xc00419d441}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-26 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:05:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b7307dc8e08d0f1195b67947d4dc22e407496007ae14e26be32f0d9b8c13e76e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.512: INFO: Pod "webserver-deployment-595b5b9587-jjk44" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jjk44 webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-jjk44 8fd3ea2a-e8d3-45a5-93d5-53ddf93ffa79 4548798 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d5c0 0xc00419d5c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-26 22:04:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:05:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://687a3014e9a0f9049e42ec817b3981ecf7804e22392ef1b3e48668758b48106b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.513: INFO: Pod "webserver-deployment-595b5b9587-kqpv5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kqpv5 webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-kqpv5 ef597509-db5d-480d-badf-15935e67ab84 4548932 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d7a0 0xc00419d7a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.513: INFO: Pod "webserver-deployment-595b5b9587-ll5ng" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ll5ng webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-ll5ng 7fef381d-eee8-45d0-8df4-1f0ffd8c1065 4548945 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419d8c7 0xc00419d8c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.513: INFO: Pod "webserver-deployment-595b5b9587-msx5l" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-msx5l webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-msx5l 81e10aa4-de98-4198-ad3e-516f8b7b20a5 4548926 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419da47 0xc00419da48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.514: INFO: Pod "webserver-deployment-595b5b9587-rrrsw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rrrsw webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-rrrsw cafcd4bc-efb9-4e2f-99a7-36ac999568e0 4548976 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419db97 0xc00419db98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-26 22:05:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
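rrrsw is the in-between case: it has been scheduled and already has a StartTime and HostIP, but its single container is still in State.Waiting with Reason:ContainerCreating, which is why Ready and ContainersReady are False with Reason:ContainersNotReady. Pulling that reason out of the status is a one-loop affair (helper name illustrative, same imports as above):

// waitingReason returns the Waiting reason of the named container,
// e.g. "ContainerCreating", or "" if it is not in a waiting state.
func waitingReason(pod *v1.Pod, container string) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == container && cs.State.Waiting != nil {
			return cs.State.Waiting.Reason
		}
	}
	return ""
}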
Jan 26 22:05:17.514: INFO: Pod "webserver-deployment-595b5b9587-t6wl2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t6wl2 webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-t6wl2 38085cde-3891-4074-9d85-90f329b72387 4548824 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419dcf7 0xc00419dcf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:03 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-26 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:05:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1c8f78b62898ec7439d87c46f29a6b313b8a48994f6b62e6ab0422c84496425e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.515: INFO: Pod "webserver-deployment-595b5b9587-vgrlc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vgrlc webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-vgrlc 85accbce-128f-4d72-804c-93d8f202f9e4 4548795 0 2020-01-26 22:04:37 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc00419dec0 0xc00419dec1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-26 22:05:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-26 22:04:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:04:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://629886ed32bb943d2fd311d590e3a40e557efba2b85da45aa9361a0444d29d75,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.515: INFO: Pod "webserver-deployment-595b5b9587-wg4xv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wg4xv webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-wg4xv e0038d6a-fbec-423f-bd24-8cfe3d4bf8e4 4548966 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc004176050 0xc004176051}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-26 22:05:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.516: INFO: Pod "webserver-deployment-595b5b9587-wx8sv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wx8sv webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-wx8sv bcb2aca1-9a5e-4ccb-9142-97e54405579c 4548967 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc004176267 0xc004176268}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-26 22:05:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.516: INFO: Pod "webserver-deployment-595b5b9587-zqqdg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zqqdg webserver-deployment-595b5b9587- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-595b5b9587-zqqdg b8d02845-1976-4982-8f83-62eb4e50c222 4548944 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 44363e4a-8b05-4cb3-aff4-bf092e598f62 0xc004176427 0xc004176428}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
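(Editorial sketch, not part of the captured log.) Up to this point the dump covers pods owned by ReplicaSet webserver-deployment-595b5b9587, whose template runs docker.io/library/httpd:2.4.38-alpine; the entries that follow are owned by webserver-deployment-c7997dcc8, whose template references the image webserver:404 — a tag that does not resolve, which is why every one of those pods is reported as "not available". A minimal client-go sketch to view both ReplicaSets side by side, assuming client-go v0.17.x (matching the v1.17.0 cluster in this run, so List takes no context argument), the namespace deployment-5737 and label name=httpd from the dumps above, and the suite's kubeconfig path:

// rs_summary.go: list the two ReplicaSets behind webserver-deployment and
// show which image each one is rolling out. Minimal sketch under the
// assumptions stated above; names are taken from this run's log.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Both ReplicaSets carry the deployment's name=httpd label; they differ
	// only in the pod-template-hash label and the container image.
	rsList, err := clientset.AppsV1().ReplicaSets("deployment-5737").List(
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		fmt.Printf("%s image=%s replicas=%d ready=%d available=%d\n",
			rs.Name,
			rs.Spec.Template.Spec.Containers[0].Image,
			rs.Status.Replicas, rs.Status.ReadyReplicas, rs.Status.AvailableReplicas)
	}
}

Run against the state captured above, this would show webserver-deployment-595b5b9587 with ready replicas and webserver-deployment-c7997dcc8 with ready=0, since its image never pulls.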
Jan 26 22:05:17.517: INFO: Pod "webserver-deployment-c7997dcc8-6zdlv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6zdlv webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-6zdlv baedea19-355c-428d-bb9d-89497a3ec01a 4548901 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176597 0xc004176598}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.517: INFO: Pod "webserver-deployment-c7997dcc8-84r84" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-84r84 webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-84r84 03e1365b-2f40-402b-aa13-b5111e6d6edf 4548853 0 2020-01-26 22:05:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176767 0xc004176768}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-26 22:05:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.518: INFO: Pod "webserver-deployment-c7997dcc8-c27bj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c27bj webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-c27bj e59eb66e-277a-4100-8703-5910d044e6d0 4548911 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176957 0xc004176958}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.518: INFO: Pod "webserver-deployment-c7997dcc8-dzwc4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dzwc4 webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-dzwc4 c5348996-0c60-4265-b3f9-225d89ccc741 4548948 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176ab7 0xc004176ab8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.518: INFO: Pod "webserver-deployment-c7997dcc8-jmscv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jmscv webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-jmscv 0856491f-3157-40bc-8bd1-3da814212602 4548962 0 2020-01-26 22:05:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176c07 0xc004176c08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.519: INFO: Pod "webserver-deployment-c7997dcc8-k94ds" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k94ds webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-k94ds 72aa2f88-507a-4123-a18a-120bf797311e 4548914 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176d87 0xc004176d88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.519: INFO: Pod "webserver-deployment-c7997dcc8-mpdhm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mpdhm webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-mpdhm d72db344-b222-49b8-a7b1-06808283f5e1 4548947 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176ea7 0xc004176ea8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.520: INFO: Pod "webserver-deployment-c7997dcc8-nvs4l" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nvs4l webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-nvs4l 05e2dd11-8271-4a61-baae-35bca419195d 4548876 0 2020-01-26 22:05:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004176fd7 0xc004176fd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-26 22:05:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.520: INFO: Pod "webserver-deployment-c7997dcc8-q7t4j" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q7t4j webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-q7t4j 962dc34a-2b26-4f80-80d9-1871117e6a8c 4548939 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004177187 0xc004177188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.520: INFO: Pod "webserver-deployment-c7997dcc8-q8gwx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q8gwx webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-q8gwx 30fda3fa-bf9a-4ce5-b983-aab7ea2b259d 4548869 0 2020-01-26 22:05:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc0041772b7 0xc0041772b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-26 22:05:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.521: INFO: Pod "webserver-deployment-c7997dcc8-qbvl7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qbvl7 webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-qbvl7 a1f4562a-c7e1-4415-b23d-46186845a82d 4548854 0 2020-01-26 22:05:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004177437 0xc004177438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-26 22:05:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.521: INFO: Pod "webserver-deployment-c7997dcc8-tzrhv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tzrhv webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-tzrhv 5c0803c7-256e-465b-a8cd-888f5e9b51c5 4548936 0 2020-01-26 22:05:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc004177657 0xc004177658}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 26 22:05:17.522: INFO: Pod "webserver-deployment-c7997dcc8-w4psg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w4psg webserver-deployment-c7997dcc8- deployment-5737 /api/v1/namespaces/deployment-5737/pods/webserver-deployment-c7997dcc8-w4psg 4063e312-80d0-420e-88a0-95598146b1f0 4548880 0 2020-01-26 22:05:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e6052ba4-a2b2-416b-aa6f-be64a97abcd6 0xc0041777a7 0xc0041777a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrg8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrg8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrg8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:05:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-26 22:05:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:05:17.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5737" for this suite.

• [SLOW TEST:44.278 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":173,"skipped":2857,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
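
Note on the test above: it rolls the deployment to an image tag that never becomes ready (Image:webserver:404 in the pod dumps, all stuck Pending/ContainerCreating), then resizes the deployment and checks that the new and old ReplicaSets absorb the resize in proportion to their current sizes. A minimal Go sketch of a comparable spec; the replica count and the surge/unavailable values are illustrative, not read from this log:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// RollingUpdate with both maxSurge and maxUnavailable nonzero is what
	// leaves a mix of old and new ReplicaSets for proportional scaling to
	// divide a resize across.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: "webserver:404"}},
				},
			},
		},
	}
	fmt.Printf("%s: surge=%s unavailable=%s\n", d.Name, maxSurge.String(), maxUnavailable.String())
}
```
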
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:05:21.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Jan 26 22:05:25.794: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:05:25.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1271" for this suite.

• [SLOW TEST:5.252 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1955
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":174,"skipped":2887,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
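
Note on the test above: with --port 0 (the -p 0 in the command line) kubectl lets the kernel pick a free port, so the only way to reach the proxy is to parse the address from the line kubectl prints at startup; the test then curls /api/ through it. A hedged sketch of that pattern; the binary name, the startup-line format, and the regexp are assumptions rather than anything captured in this log:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Start the proxy on a kernel-assigned port and recover the port from
	// the first line of output (typically "Starting to serve on 127.0.0.1:N").
	cmd := exec.Command("kubectl", "proxy", "--port", "0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	scanner := bufio.NewScanner(stdout)
	portRe := regexp.MustCompile(`:(\d+)$`)
	if scanner.Scan() {
		if m := portRe.FindStringSubmatch(scanner.Text()); m != nil {
			// The e2e test then fetches http://127.0.0.1:<port>/api/.
			fmt.Println("proxy listening on port", m[1])
		}
	}
}
```
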
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:05:26.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-2329/secret-test-849ecd80-77c4-44cb-95c6-e04c34f8a4ca
STEP: Creating a pod to test consume secrets
Jan 26 22:05:27.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e" in namespace "secrets-2329" to be "success or failure"
Jan 26 22:05:28.028: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 316.46504ms
Jan 26 22:05:30.951: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.239081685s
Jan 26 22:05:34.012: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300196004s
Jan 26 22:05:36.713: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001615993s
Jan 26 22:05:38.949: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.237341207s
Jan 26 22:05:42.558: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.846507797s
Jan 26 22:05:45.202: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.490543253s
Jan 26 22:05:47.260: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.547986649s
Jan 26 22:05:49.622: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.90973105s
Jan 26 22:05:53.243: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.531260779s
Jan 26 22:05:57.023: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 29.311635863s
Jan 26 22:06:01.407: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.694850008s
Jan 26 22:06:04.544: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.832035082s
Jan 26 22:06:06.825: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.113281421s
Jan 26 22:06:12.056: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 44.344591101s
Jan 26 22:06:14.269: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.557134191s
Jan 26 22:06:16.824: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.112673469s
Jan 26 22:06:19.621: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 51.90894019s
Jan 26 22:06:21.739: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 54.027457504s
Jan 26 22:06:24.017: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 56.304964316s
Jan 26 22:06:26.135: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 58.423577115s
Jan 26 22:06:28.207: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.495033648s
Jan 26 22:06:30.217: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.505551641s
Jan 26 22:06:32.224: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m4.512422535s
STEP: Saw pod success
Jan 26 22:06:32.224: INFO: Pod "pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e" satisfied condition "success or failure"
Jan 26 22:06:32.228: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e container env-test: 
STEP: delete the pod
Jan 26 22:06:32.763: INFO: Waiting for pod pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e to disappear
Jan 26 22:06:32.770: INFO: Pod pod-configmaps-0ed8765f-864f-4af2-a989-cfaa663a262e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:06:32.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2329" for this suite.

• [SLOW TEST:66.334 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2919,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
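
Note on the test above: the pod it builds (named pod-configmaps-… even though it lives in the secrets suite) exposes one key of a freshly created Secret as an environment variable, prints its environment, and exits. A minimal Go sketch of that wiring; the secret name, key, image, and command are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One Secret key surfaced as an env var; the container only needs to
	// start, print env, and exit for the pod to reach Succeeded.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[0].Name)
}
```
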
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:06:32.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 26 22:06:32.903: INFO: Waiting up to 5m0s for pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c" in namespace "downward-api-2868" to be "success or failure"
Jan 26 22:06:32.929: INFO: Pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.434782ms
Jan 26 22:06:34.938: INFO: Pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034971487s
Jan 26 22:06:36.946: INFO: Pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043151538s
Jan 26 22:06:38.968: INFO: Pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065420941s
Jan 26 22:06:40.974: INFO: Pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070776641s
STEP: Saw pod success
Jan 26 22:06:40.974: INFO: Pod "downward-api-23da553a-5511-441e-bfe4-60abe97a267c" satisfied condition "success or failure"
Jan 26 22:06:40.977: INFO: Trying to get logs from node jerma-node pod downward-api-23da553a-5511-441e-bfe4-60abe97a267c container dapi-container: 
STEP: delete the pod
Jan 26 22:06:41.009: INFO: Waiting for pod downward-api-23da553a-5511-441e-bfe4-60abe97a267c to disappear
Jan 26 22:06:41.016: INFO: Pod downward-api-23da553a-5511-441e-bfe4-60abe97a267c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:06:41.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2868" for this suite.

• [SLOW TEST:8.254 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2941,"failed":0}
SSSSSSSSSSSSSSSSSS
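
Note on the test above: the downward API pod maps status.hostIP into an environment variable, which is why success only requires the container to start and print its environment. A sketch of the relevant field reference; pod, container, and variable names are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// status.hostIP is the documented downward-API source for the address
	// of the node the pod landed on (10.96.2.250 for jerma-node in this log).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[0].ValueFrom.FieldRef.FieldPath)
}
```
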
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:06:41.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:06:41.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:06:43.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:06:45.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:06:47.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673201, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:06:50.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:06:51.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7325" for this suite.
STEP: Destroying namespace "webhook-7325-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.643 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":177,"skipped":2959,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
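
Note on the test above: the listing and collection-delete steps go through the admissionregistration/v1 API — create several MutatingWebhookConfigurations, list them, delete them as a collection, and confirm a subsequent configMap is no longer mutated. A sketch of the list call with client-go; it assumes a recent client-go (the context argument postdates the v1.17 vintage of this log) and uses an empty selector where the real test filters to the objects it created:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List every mutating webhook configuration in the cluster.
	list, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, wh := range list.Items {
		fmt.Println(wh.Name)
	}
}
```
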
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:06:51.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 26 22:06:52.005: INFO: Waiting up to 5m0s for pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53" in namespace "emptydir-9899" to be "success or failure"
Jan 26 22:06:52.148: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53": Phase="Pending", Reason="", readiness=false. Elapsed: 142.32125ms
Jan 26 22:06:54.155: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150011299s
Jan 26 22:06:56.165: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160141355s
Jan 26 22:06:58.218: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212671281s
Jan 26 22:07:00.227: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221408741s
Jan 26 22:07:02.232: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.226932893s
STEP: Saw pod success
Jan 26 22:07:02.232: INFO: Pod "pod-1ab2e233-1530-4377-8ad3-cfec5887ae53" satisfied condition "success or failure"
Jan 26 22:07:02.235: INFO: Trying to get logs from node jerma-node pod pod-1ab2e233-1530-4377-8ad3-cfec5887ae53 container test-container: 
STEP: delete the pod
Jan 26 22:07:02.276: INFO: Waiting for pod pod-1ab2e233-1530-4377-8ad3-cfec5887ae53 to disappear
Jan 26 22:07:02.294: INFO: Pod pod-1ab2e233-1530-4377-8ad3-cfec5887ae53 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:07:02.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9899" for this suite.

• [SLOW TEST:10.701 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2983,"failed":0}
SSSSSSS
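
Note on the test above: "emptydir 0666 on tmpfs" means a memory-backed emptyDir mounted by a container running as a non-root UID, which writes a file with mode 0666 and verifies it. A minimal sketch of that pod shape; the UID and the shell command are assumptions (the real test uses its mounttest image):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Medium: Memory selects tmpfs for the emptyDir; RunAsUser makes the
	// write happen as a non-root user, matching the [LinuxOnly] variant.
	uid := int64(1001)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "docker.io/library/busybox:1.29",
				Command:         []string{"sh", "-c", "touch /test/file && chmod 0666 /test/file && ls -l /test"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```
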
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:07:02.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1750/configmap-test-0a4441b5-f904-42f3-a8b5-b2527a520431
STEP: Creating a pod to test consume configMaps
Jan 26 22:07:02.570: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85" in namespace "configmap-1750" to be "success or failure"
Jan 26 22:07:02.595: INFO: Pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85": Phase="Pending", Reason="", readiness=false. Elapsed: 24.449435ms
Jan 26 22:07:04.617: INFO: Pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046687914s
Jan 26 22:07:06.633: INFO: Pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062518643s
Jan 26 22:07:08.642: INFO: Pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071235777s
Jan 26 22:07:10.649: INFO: Pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078830549s
STEP: Saw pod success
Jan 26 22:07:10.649: INFO: Pod "pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85" satisfied condition "success or failure"
Jan 26 22:07:10.653: INFO: Trying to get logs from node jerma-node pod pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85 container env-test: 
STEP: delete the pod
Jan 26 22:07:10.799: INFO: Waiting for pod pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85 to disappear
Jan 26 22:07:10.815: INFO: Pod pod-configmaps-bf8965d2-42a6-471e-94fe-fd9a49e48d85 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:07:10.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1750" for this suite.

• [SLOW TEST:8.589 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2990,"failed":0}
SSSSS
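
Note on the test above: it is the ConfigMap counterpart of the Secrets env test earlier in this run. Two equivalent ways to surface ConfigMap data as environment variables, sketched below; the configMap name and key are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Single key -> single variable, as the test does.
	one := corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}
	// Whole ConfigMap -> one variable per key, via envFrom.
	all := corev1.EnvFromSource{
		ConfigMapRef: &corev1.ConfigMapEnvSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
		},
	}
	fmt.Println(one.Name, all.ConfigMapRef.Name)
}
```
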
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:07:10.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 26 22:07:11.127: INFO: Waiting up to 5m0s for pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78" in namespace "emptydir-8528" to be "success or failure"
Jan 26 22:07:11.240: INFO: Pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78": Phase="Pending", Reason="", readiness=false. Elapsed: 112.867587ms
Jan 26 22:07:13.249: INFO: Pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122052556s
Jan 26 22:07:15.257: INFO: Pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129229137s
Jan 26 22:07:17.402: INFO: Pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274409833s
Jan 26 22:07:19.419: INFO: Pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.291910795s
STEP: Saw pod success
Jan 26 22:07:19.419: INFO: Pod "pod-7a876a8a-3127-43fa-abba-1251d2640b78" satisfied condition "success or failure"
Jan 26 22:07:19.427: INFO: Trying to get logs from node jerma-node pod pod-7a876a8a-3127-43fa-abba-1251d2640b78 container test-container: 
STEP: delete the pod
Jan 26 22:07:19.676: INFO: Waiting for pod pod-7a876a8a-3127-43fa-abba-1251d2640b78 to disappear
Jan 26 22:07:19.693: INFO: Pod pod-7a876a8a-3127-43fa-abba-1251d2640b78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:07:19.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8528" for this suite.

• [SLOW TEST:8.748 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2995,"failed":0}
SSS
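
Note on the test above: relative to the tmpfs case a few specs back, the only spec-level difference is the emptyDir medium. A compact sketch of the two variants; an empty Medium selects the node's default (disk-backed) storage, while here the container runs as root and uses mode 0777:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// StorageMediumDefault is the empty string, i.e. "whatever backs the
	// kubelet's pod directory"; StorageMediumMemory is tmpfs.
	disk := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault}}
	tmpfs := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}}
	fmt.Println(disk.EmptyDir.Medium == "", tmpfs.EmptyDir.Medium)
}
```
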
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:07:19.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:07:36.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4166" for this suite.

• [SLOW TEST:16.720 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":181,"skipped":2998,"failed":0}
S
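
Note on the test above: the Terminating scope selects pods with spec.activeDeadlineSeconds set, and NotTerminating selects the complement, so a long-running pod is counted only by the NotTerminating quota and a deadline-bounded pod only by the Terminating one, exactly the cross-check the STEPs walk through. A sketch of the scoped quota object; the hard limits are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A quota that only counts pods with an activeDeadlineSeconds; swap the
	// scope to ResourceQuotaScopeNotTerminating for the other half of the test.
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-terminating"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:        resource.MustParse("5"),
				corev1.ResourceRequestsCPU: resource.MustParse("1"),
			},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating},
		},
	}
	fmt.Println(quota.Name, quota.Spec.Scopes)
}
```
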
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:07:36.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:07:36.619: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41" in namespace "downward-api-1958" to be "success or failure"
Jan 26 22:07:36.654: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41": Phase="Pending", Reason="", readiness=false. Elapsed: 34.696946ms
Jan 26 22:07:38.665: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045912369s
Jan 26 22:07:40.671: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051559993s
Jan 26 22:07:42.679: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059024292s
Jan 26 22:07:44.687: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067977979s
Jan 26 22:07:46.695: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075821124s
STEP: Saw pod success
Jan 26 22:07:46.695: INFO: Pod "downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41" satisfied condition "success or failure"
Jan 26 22:07:46.701: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41 container client-container: 
STEP: delete the pod
Jan 26 22:07:46.798: INFO: Waiting for pod downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41 to disappear
Jan 26 22:07:46.808: INFO: Pod downwardapi-volume-d115cdf9-27ea-414a-9e09-cf4e12d67f41 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:07:46.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1958" for this suite.

• [SLOW TEST:10.379 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
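
Note on the test above: unlike the env-var downward API tests, this one projects the container's memory limit into a file via a downwardAPI volume; the client-container then reads the file back. A sketch of the volume shape; the volume name and file path are illustrative, and ContainerName must name a container in the same pod:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// resourceFieldRef projects a compute resource (here limits.memory) of a
	// named container into a file inside the volume.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Println(vol.VolumeSource.DownwardAPI.Items[0].Path)
}
```
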
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:07:46.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 26 22:07:46.931: INFO: PodSpec: initContainers in spec.initContainers
Jan 26 22:08:41.385: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9e1e72c4-bf99-4eac-8d93-f3fced375d9c", GenerateName:"", Namespace:"init-container-2339", SelfLink:"/api/v1/namespaces/init-container-2339/pods/pod-init-9e1e72c4-bf99-4eac-8d93-f3fced375d9c", UID:"ae90c033-7679-4fb7-9657-406eea54472e", ResourceVersion:"4549924", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715673266, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"931334229"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dtvv7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005dd2100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dtvv7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dtvv7", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dtvv7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004590298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026945a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004590320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004590340)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004590348), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00459034c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673267, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673267, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673267, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673266, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0045c8160), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0012d2150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://9c57aebecb72cd842b9322461e12f7e7e810d501a03654aeb4f1a6dd810a9cde", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0045c81a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0045c8180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0045903df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:08:41.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2339" for this suite.

• [SLOW TEST:54.610 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":183,"skipped":3033,"failed":0}
SS
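
Note on the test above: the struct dump records exactly the intended failure mode — init1 runs /bin/false, so init2 never starts and run1 stays Waiting, while RestartPolicy Always makes the kubelet keep retrying init1 (RestartCount:3 by the time the condition fires). The same pod, reduced to its spec:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run sequentially and must all succeed before any app
	// container starts; a permanently failing first one blocks the pod forever.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), "init containers gate", pod.Spec.Containers[0].Name)
}
```
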
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:08:41.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:08:52.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6322" for this suite.

• [SLOW TEST:11.495 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":184,"skipped":3035,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
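
Note on the test above: object-count quotas track API objects, not compute; creating the Service raises status.used for "services" to 1 and deleting it releases the usage, which is the whole lifecycle the STEPs assert. A sketch of the quota object; the hard limit is illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A namespace-scoped cap on the number of Service objects.
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-services"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceServices: resource.MustParse("1"),
			},
		},
	}
	q := quota.Spec.Hard[corev1.ResourceServices]
	fmt.Println(quota.Name, q.String())
}
```
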
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:08:52.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:08:53.745: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:08:55.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:08:57.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:08:59.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673333, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:09:02.835: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply with the validating webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply with the validating webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply with the validating webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:09:03.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3742" for this suite.
STEP: Destroying namespace "webhook-3742-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.267 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":185,"skipped":3061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:09:03.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:09:08.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-812" for this suite.

• [SLOW TEST:5.085 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":186,"skipped":3102,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:09:08.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:09:08.519: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:09:09.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6488" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":187,"skipped":3104,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:09:09.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 26 22:09:09.282: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 22:09:09.454: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 22:09:09.460: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 26 22:09:09.470: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.470: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:09:09.470: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 26 22:09:09.471: INFO: 	Container weave ready: true, restart count 1
Jan 26 22:09:09.471: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:09:09.471: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 26 22:09:09.497: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 26 22:09:09.497: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:09:09.497: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 26 22:09:09.497: INFO: 	Container weave ready: true, restart count 0
Jan 26 22:09:09.497: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:09:09.497: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 26 22:09:09.497: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 26 22:09:09.497: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container etcd ready: true, restart count 1
Jan 26 22:09:09.497: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:09:09.497: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 26 22:09:09.497: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to find a node that can run it.
STEP: Explicitly deleting the pod to free the resources it holds.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-10b178cd-9c85-4219-b20e-fa288e7eacf3 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1, on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-10b178cd-9c85-4219-b20e-fa288e7eacf3 from the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-10b178cd-9c85-4219-b20e-fa288e7eacf3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:14:24.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3451" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:315.847 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":188,"skipped":3157,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:14:25.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:14:25.313: INFO: Creating deployment "test-recreate-deployment"
Jan 26 22:14:25.325: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 26 22:14:25.496: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 26 22:14:27.509: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 26 22:14:27.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:14:29.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:14:31.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:14:33.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:14:35.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673665, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:14:37.531: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 26 22:14:37.543: INFO: Updating deployment test-recreate-deployment
Jan 26 22:14:37.543: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 26 22:14:37.868: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6135 /apis/apps/v1/namespaces/deployment-6135/deployments/test-recreate-deployment 6c457f95-d618-4003-bc8f-4f4519f9dfcc 4551064 2 2020-01-26 22:14:25 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fc8d08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-26 22:14:37 +0000 UTC,LastTransitionTime:2020-01-26 22:14:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-26 22:14:37 +0000 UTC,LastTransitionTime:2020-01-26 22:14:25 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 26 22:14:37.917: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-6135 /apis/apps/v1/namespaces/deployment-6135/replicasets/test-recreate-deployment-5f94c574ff e7a7310a-acce-42ce-bd3b-1c51d32e50c2 4551063 1 2020-01-26 22:14:37 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6c457f95-d618-4003-bc8f-4f4519f9dfcc 0xc003fc9097 0xc003fc9098}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fc90f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:14:37.917: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 26 22:14:37.917: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-6135 /apis/apps/v1/namespaces/deployment-6135/replicasets/test-recreate-deployment-799c574856 816cb9eb-5632-4fbe-b16c-eb3bfdfa1856 4551054 2 2020-01-26 22:14:25 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6c457f95-d618-4003-bc8f-4f4519f9dfcc 0xc003fc9167 0xc003fc9168}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003fc91d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:14:37.923: INFO: Pod "test-recreate-deployment-5f94c574ff-wgfhb" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-wgfhb test-recreate-deployment-5f94c574ff- deployment-6135 /api/v1/namespaces/deployment-6135/pods/test-recreate-deployment-5f94c574ff-wgfhb e2e06605-424a-4bba-86e5-e272a699dee6 4551065 0 2020-01-26 22:14:37 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff e7a7310a-acce-42ce-bd3b-1c51d32e50c2 0xc003fc9617 0xc003fc9618}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gtmwx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gtmwx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gtmwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:14:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:14:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:14:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:14:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-26 22:14:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:14:37.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6135" for this suite.

• [SLOW TEST:12.876 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":189,"skipped":3169,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:14:37.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 26 22:14:38.104: INFO: Waiting up to 5m0s for pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f" in namespace "downward-api-2662" to be "success or failure"
Jan 26 22:14:38.131: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.83755ms
Jan 26 22:14:40.139: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035415155s
Jan 26 22:14:42.147: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042633133s
Jan 26 22:14:44.158: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054287407s
Jan 26 22:14:46.230: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126372582s
Jan 26 22:14:48.237: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132457741s
Jan 26 22:14:50.244: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.140047457s
STEP: Saw pod success
Jan 26 22:14:50.244: INFO: Pod "downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f" satisfied condition "success or failure"
Jan 26 22:14:50.252: INFO: Trying to get logs from node jerma-node pod downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f container dapi-container: 
STEP: delete the pod
Jan 26 22:14:50.701: INFO: Waiting for pod downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f to disappear
Jan 26 22:14:50.733: INFO: Pod downward-api-e9f7ede1-da53-4762-ac26-fc90040ebe3f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:14:50.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2662" for this suite.

• [SLOW TEST:12.815 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3180,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:14:50.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 26 22:14:58.761: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:14:58.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3652" for this suite.

• [SLOW TEST:8.100 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3190,"failed":0}
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:14:58.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8525
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 22:14:59.025: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 22:15:35.284: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8525 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 22:15:35.284: INFO: >>> kubeConfig: /root/.kube/config
I0126 22:15:35.337772       8 log.go:172] (0xc000b426e0) (0xc0021fcd20) Create stream
I0126 22:15:35.337861       8 log.go:172] (0xc000b426e0) (0xc0021fcd20) Stream added, broadcasting: 1
I0126 22:15:35.341500       8 log.go:172] (0xc000b426e0) Reply frame received for 1
I0126 22:15:35.341539       8 log.go:172] (0xc000b426e0) (0xc0019fc000) Create stream
I0126 22:15:35.341554       8 log.go:172] (0xc000b426e0) (0xc0019fc000) Stream added, broadcasting: 3
I0126 22:15:35.342892       8 log.go:172] (0xc000b426e0) Reply frame received for 3
I0126 22:15:35.342916       8 log.go:172] (0xc000b426e0) (0xc0021fcf00) Create stream
I0126 22:15:35.342924       8 log.go:172] (0xc000b426e0) (0xc0021fcf00) Stream added, broadcasting: 5
I0126 22:15:35.346469       8 log.go:172] (0xc000b426e0) Reply frame received for 5
I0126 22:15:35.419974       8 log.go:172] (0xc000b426e0) Data frame received for 3
I0126 22:15:35.420036       8 log.go:172] (0xc0019fc000) (3) Data frame handling
I0126 22:15:35.420073       8 log.go:172] (0xc0019fc000) (3) Data frame sent
I0126 22:15:35.563633       8 log.go:172] (0xc000b426e0) Data frame received for 1
I0126 22:15:35.563749       8 log.go:172] (0xc0021fcd20) (1) Data frame handling
I0126 22:15:35.563808       8 log.go:172] (0xc0021fcd20) (1) Data frame sent
I0126 22:15:35.563878       8 log.go:172] (0xc000b426e0) (0xc0021fcd20) Stream removed, broadcasting: 1
I0126 22:15:35.564639       8 log.go:172] (0xc000b426e0) (0xc0021fcf00) Stream removed, broadcasting: 5
I0126 22:15:35.564902       8 log.go:172] (0xc000b426e0) (0xc0019fc000) Stream removed, broadcasting: 3
I0126 22:15:35.565284       8 log.go:172] (0xc000b426e0) (0xc0021fcd20) Stream removed, broadcasting: 1
I0126 22:15:35.565396       8 log.go:172] (0xc000b426e0) Go away received
I0126 22:15:35.565618       8 log.go:172] (0xc000b426e0) (0xc0019fc000) Stream removed, broadcasting: 3
I0126 22:15:35.565680       8 log.go:172] (0xc000b426e0) (0xc0021fcf00) Stream removed, broadcasting: 5
Jan 26 22:15:35.566: INFO: Waiting for responses: map[]
Jan 26 22:15:35.572: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8525 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 22:15:35.573: INFO: >>> kubeConfig: /root/.kube/config
I0126 22:15:35.626889       8 log.go:172] (0xc000e4b760) (0xc002427720) Create stream
I0126 22:15:35.627056       8 log.go:172] (0xc000e4b760) (0xc002427720) Stream added, broadcasting: 1
I0126 22:15:35.633279       8 log.go:172] (0xc000e4b760) Reply frame received for 1
I0126 22:15:35.633412       8 log.go:172] (0xc000e4b760) (0xc0027f5040) Create stream
I0126 22:15:35.633428       8 log.go:172] (0xc000e4b760) (0xc0027f5040) Stream added, broadcasting: 3
I0126 22:15:35.636045       8 log.go:172] (0xc000e4b760) Reply frame received for 3
I0126 22:15:35.636210       8 log.go:172] (0xc000e4b760) (0xc002427860) Create stream
I0126 22:15:35.636266       8 log.go:172] (0xc000e4b760) (0xc002427860) Stream added, broadcasting: 5
I0126 22:15:35.639301       8 log.go:172] (0xc000e4b760) Reply frame received for 5
I0126 22:15:35.742389       8 log.go:172] (0xc000e4b760) Data frame received for 3
I0126 22:15:35.742913       8 log.go:172] (0xc0027f5040) (3) Data frame handling
I0126 22:15:35.743010       8 log.go:172] (0xc0027f5040) (3) Data frame sent
I0126 22:15:35.861289       8 log.go:172] (0xc000e4b760) Data frame received for 1
I0126 22:15:35.861814       8 log.go:172] (0xc000e4b760) (0xc002427860) Stream removed, broadcasting: 5
I0126 22:15:35.861992       8 log.go:172] (0xc002427720) (1) Data frame handling
I0126 22:15:35.862255       8 log.go:172] (0xc002427720) (1) Data frame sent
I0126 22:15:35.862361       8 log.go:172] (0xc000e4b760) (0xc0027f5040) Stream removed, broadcasting: 3
I0126 22:15:35.862519       8 log.go:172] (0xc000e4b760) (0xc002427720) Stream removed, broadcasting: 1
I0126 22:15:35.862636       8 log.go:172] (0xc000e4b760) Go away received
I0126 22:15:35.863551       8 log.go:172] (0xc000e4b760) (0xc002427720) Stream removed, broadcasting: 1
I0126 22:15:35.863594       8 log.go:172] (0xc000e4b760) (0xc0027f5040) Stream removed, broadcasting: 3
I0126 22:15:35.863687       8 log.go:172] (0xc000e4b760) (0xc002427860) Stream removed, broadcasting: 5
Jan 26 22:15:35.864: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:15:35.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8525" for this suite.

• [SLOW TEST:37.036 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:15:35.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 26 22:15:36.075: INFO: Waiting up to 5m0s for pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2" in namespace "emptydir-2222" to be "success or failure"
Jan 26 22:15:36.125: INFO: Pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2": Phase="Pending", Reason="", readiness=false. Elapsed: 50.778515ms
Jan 26 22:15:38.135: INFO: Pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060423799s
Jan 26 22:15:40.142: INFO: Pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067800183s
Jan 26 22:15:42.163: INFO: Pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088073729s
Jan 26 22:15:44.182: INFO: Pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10713872s
STEP: Saw pod success
Jan 26 22:15:44.182: INFO: Pod "pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2" satisfied condition "success or failure"
Jan 26 22:15:44.212: INFO: Trying to get logs from node jerma-node pod pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2 container test-container: 
STEP: delete the pod
Jan 26 22:15:44.403: INFO: Waiting for pod pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2 to disappear
Jan 26 22:15:44.424: INFO: Pod pod-53b90e8e-1d1a-4304-9e7e-7ad6aeec61d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:15:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2222" for this suite.

• [SLOW TEST:8.564 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3267,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:15:44.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Jan 26 22:15:45.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9150 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 26 22:15:48.913: INFO: stderr: ""
Jan 26 22:15:48.913: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jan 26 22:15:48.913: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 26 22:15:48.913: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9150" to be "running and ready, or succeeded"
Jan 26 22:15:48.987: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 73.281147ms
Jan 26 22:15:50.994: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080843794s
Jan 26 22:15:52.999: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085916623s
Jan 26 22:15:55.008: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094318029s
Jan 26 22:15:57.020: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.106393448s
Jan 26 22:15:57.020: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 26 22:15:57.020: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 26 22:15:57.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9150'
Jan 26 22:15:57.179: INFO: stderr: ""
Jan 26 22:15:57.179: INFO: stdout: "I0126 22:15:55.404387       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/wf5 299\nI0126 22:15:55.604625       1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/2svd 479\nI0126 22:15:55.804724       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/gnh6 348\nI0126 22:15:56.004582       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/ccw 261\nI0126 22:15:56.204620       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/zdt 494\nI0126 22:15:56.404609       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/bcv 367\nI0126 22:15:56.604586       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/d2t 469\nI0126 22:15:56.804677       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/zw4 562\nI0126 22:15:57.005013       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/58nc 390\n"
STEP: limiting log lines
Jan 26 22:15:57.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9150 --tail=1'
Jan 26 22:15:57.308: INFO: stderr: ""
Jan 26 22:15:57.308: INFO: stdout: "I0126 22:15:57.206382       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/dj5 315\n"
Jan 26 22:15:57.308: INFO: got output "I0126 22:15:57.206382       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/dj5 315\n"
STEP: limiting log bytes
Jan 26 22:15:57.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9150 --limit-bytes=1'
Jan 26 22:15:57.431: INFO: stderr: ""
Jan 26 22:15:57.431: INFO: stdout: "I"
Jan 26 22:15:57.431: INFO: got output "I"
STEP: exposing timestamps
Jan 26 22:15:57.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9150 --tail=1 --timestamps'
Jan 26 22:15:57.527: INFO: stderr: ""
Jan 26 22:15:57.527: INFO: stdout: "2020-01-26T22:15:57.407637075Z I0126 22:15:57.404743       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/5cwk 425\n"
Jan 26 22:15:57.527: INFO: got output "2020-01-26T22:15:57.407637075Z I0126 22:15:57.404743       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/5cwk 425\n"
STEP: restricting to a time range
Jan 26 22:16:00.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9150 --since=1s'
Jan 26 22:16:00.204: INFO: stderr: ""
Jan 26 22:16:00.205: INFO: stdout: "I0126 22:15:59.204609       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/tt4d 410\nI0126 22:15:59.404621       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/8bj 412\nI0126 22:15:59.604524       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/9gxj 316\nI0126 22:15:59.804623       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/k6sj 232\nI0126 22:16:00.004656       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/826 275\n"
Jan 26 22:16:00.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9150 --since=24h'
Jan 26 22:16:00.337: INFO: stderr: ""
Jan 26 22:16:00.337: INFO: stdout: "I0126 22:15:55.404387       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/wf5 299\nI0126 22:15:55.604625       1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/2svd 479\nI0126 22:15:55.804724       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/gnh6 348\nI0126 22:15:56.004582       1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/ccw 261\nI0126 22:15:56.204620       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/zdt 494\nI0126 22:15:56.404609       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/bcv 367\nI0126 22:15:56.604586       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/d2t 469\nI0126 22:15:56.804677       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/zw4 562\nI0126 22:15:57.005013       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/58nc 390\nI0126 22:15:57.206382       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/dj5 315\nI0126 22:15:57.404743       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/5cwk 425\nI0126 22:15:57.604607       1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/7gjf 204\nI0126 22:15:57.804628       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/l2q 399\nI0126 22:15:58.004540       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/glq5 492\nI0126 22:15:58.204656       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/56w5 260\nI0126 22:15:58.404639       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/bwrl 262\nI0126 22:15:58.614665       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/7fb 555\nI0126 22:15:58.814858       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/kjz 396\nI0126 22:15:59.004729       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/b4t 452\nI0126 22:15:59.204609       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/tt4d 410\nI0126 22:15:59.404621       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/8bj 412\nI0126 22:15:59.604524       1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/9gxj 316\nI0126 22:15:59.804623       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/k6sj 232\nI0126 22:16:00.004656       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/826 275\nI0126 22:16:00.204585       1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/z5xr 528\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Jan 26 22:16:00.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9150'
Jan 26 22:16:12.350: INFO: stderr: ""
Jan 26 22:16:12.351: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:16:12.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9150" for this suite.

• [SLOW TEST:27.914 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":194,"skipped":3275,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:16:12.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:16:18.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9223" for this suite.

• [SLOW TEST:6.239 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3280,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:16:18.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:16:26.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-130" for this suite.

• [SLOW TEST:8.187 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":196,"skipped":3282,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:16:26.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 26 22:16:27.019: INFO: Waiting up to 5m0s for pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0" in namespace "emptydir-9903" to be "success or failure"
Jan 26 22:16:27.024: INFO: Pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.804345ms
Jan 26 22:16:29.043: INFO: Pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024010078s
Jan 26 22:16:31.051: INFO: Pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031988308s
Jan 26 22:16:33.060: INFO: Pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041195808s
Jan 26 22:16:35.081: INFO: Pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062501684s
STEP: Saw pod success
Jan 26 22:16:35.081: INFO: Pod "pod-63264910-d48c-4c7a-bb86-e1d91bad59e0" satisfied condition "success or failure"
Jan 26 22:16:35.087: INFO: Trying to get logs from node jerma-node pod pod-63264910-d48c-4c7a-bb86-e1d91bad59e0 container test-container: 
STEP: delete the pod
Jan 26 22:16:35.147: INFO: Waiting for pod pod-63264910-d48c-4c7a-bb86-e1d91bad59e0 to disappear
Jan 26 22:16:35.157: INFO: Pod pod-63264910-d48c-4c7a-bb86-e1d91bad59e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:16:35.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9903" for this suite.

• [SLOW TEST:8.371 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3302,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:16:35.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:16:44.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2364" for this suite.

• [SLOW TEST:9.583 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":198,"skipped":3312,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:16:44.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 26 22:16:45.013: INFO: Number of nodes with available pods: 0
Jan 26 22:16:45.013: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:46.966: INFO: Number of nodes with available pods: 0
Jan 26 22:16:46.966: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:47.031: INFO: Number of nodes with available pods: 0
Jan 26 22:16:47.031: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:48.029: INFO: Number of nodes with available pods: 0
Jan 26 22:16:48.029: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:49.059: INFO: Number of nodes with available pods: 0
Jan 26 22:16:49.059: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:51.264: INFO: Number of nodes with available pods: 0
Jan 26 22:16:51.264: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:52.449: INFO: Number of nodes with available pods: 0
Jan 26 22:16:52.450: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:53.974: INFO: Number of nodes with available pods: 0
Jan 26 22:16:53.975: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:54.038: INFO: Number of nodes with available pods: 0
Jan 26 22:16:54.039: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:55.042: INFO: Number of nodes with available pods: 0
Jan 26 22:16:55.042: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:56.039: INFO: Number of nodes with available pods: 2
Jan 26 22:16:56.039: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 26 22:16:56.066: INFO: Number of nodes with available pods: 1
Jan 26 22:16:56.066: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:57.137: INFO: Number of nodes with available pods: 1
Jan 26 22:16:57.137: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:58.080: INFO: Number of nodes with available pods: 1
Jan 26 22:16:58.080: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:16:59.076: INFO: Number of nodes with available pods: 1
Jan 26 22:16:59.076: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:00.094: INFO: Number of nodes with available pods: 1
Jan 26 22:17:00.094: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:01.098: INFO: Number of nodes with available pods: 1
Jan 26 22:17:01.098: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:02.077: INFO: Number of nodes with available pods: 1
Jan 26 22:17:02.077: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:03.097: INFO: Number of nodes with available pods: 1
Jan 26 22:17:03.097: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:04.082: INFO: Number of nodes with available pods: 1
Jan 26 22:17:04.082: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:05.078: INFO: Number of nodes with available pods: 1
Jan 26 22:17:05.078: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:06.083: INFO: Number of nodes with available pods: 1
Jan 26 22:17:06.083: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:07.084: INFO: Number of nodes with available pods: 1
Jan 26 22:17:07.084: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:08.082: INFO: Number of nodes with available pods: 1
Jan 26 22:17:08.082: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:09.081: INFO: Number of nodes with available pods: 1
Jan 26 22:17:09.081: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:10.082: INFO: Number of nodes with available pods: 1
Jan 26 22:17:10.082: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:11.085: INFO: Number of nodes with available pods: 1
Jan 26 22:17:11.085: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:12.082: INFO: Number of nodes with available pods: 1
Jan 26 22:17:12.082: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:13.083: INFO: Number of nodes with available pods: 1
Jan 26 22:17:13.083: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:14.088: INFO: Number of nodes with available pods: 1
Jan 26 22:17:14.088: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:15.085: INFO: Number of nodes with available pods: 1
Jan 26 22:17:15.085: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:16.079: INFO: Number of nodes with available pods: 1
Jan 26 22:17:16.079: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:17.107: INFO: Number of nodes with available pods: 1
Jan 26 22:17:17.107: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:18.085: INFO: Number of nodes with available pods: 1
Jan 26 22:17:18.085: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:19.082: INFO: Number of nodes with available pods: 1
Jan 26 22:17:19.082: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:20.113: INFO: Number of nodes with available pods: 1
Jan 26 22:17:20.113: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:17:21.084: INFO: Number of nodes with available pods: 2
Jan 26 22:17:21.084: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1809, will wait for the garbage collector to delete the pods
Jan 26 22:17:21.181: INFO: Deleting DaemonSet.extensions daemon-set took: 38.465893ms
Jan 26 22:17:21.582: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.340318ms
Jan 26 22:17:33.190: INFO: Number of nodes with available pods: 0
Jan 26 22:17:33.190: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 22:17:33.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1809/daemonsets","resourceVersion":"4551812"},"items":null}

Jan 26 22:17:33.197: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1809/pods","resourceVersion":"4551812"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:17:33.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1809" for this suite.

• [SLOW TEST:48.487 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":199,"skipped":3320,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:17:33.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jan 26 22:17:33.314: INFO: Waiting up to 5m0s for pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5" in namespace "var-expansion-2891" to be "success or failure"
Jan 26 22:17:33.318: INFO: Pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305531ms
Jan 26 22:17:35.327: INFO: Pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013434475s
Jan 26 22:17:37.336: INFO: Pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021922313s
Jan 26 22:17:39.366: INFO: Pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052148342s
Jan 26 22:17:41.371: INFO: Pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057543248s
STEP: Saw pod success
Jan 26 22:17:41.371: INFO: Pod "var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5" satisfied condition "success or failure"
Jan 26 22:17:41.376: INFO: Trying to get logs from node jerma-node pod var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5 container dapi-container: 
STEP: delete the pod
Jan 26 22:17:41.483: INFO: Waiting for pod var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5 to disappear
Jan 26 22:17:41.557: INFO: Pod var-expansion-f0c91ded-8d80-4cae-99a6-268c905af0b5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:17:41.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2891" for this suite.

• [SLOW TEST:8.349 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3322,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:17:41.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:17:42.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:17:44.624: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:17:46.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:17:48.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:17:51.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one; the update should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one; the patch should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:18:02.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2673" for this suite.
STEP: Destroying namespace "webhook-2673-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.715 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":201,"skipped":3332,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:02.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 26 22:18:02.427: INFO: Waiting up to 5m0s for pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2" in namespace "emptydir-7607" to be "success or failure"
Jan 26 22:18:02.472: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.827064ms
Jan 26 22:18:04.482: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054809822s
Jan 26 22:18:06.496: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068721994s
Jan 26 22:18:08.507: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079850997s
Jan 26 22:18:10.519: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09226938s
Jan 26 22:18:12.531: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104396796s
STEP: Saw pod success
Jan 26 22:18:12.532: INFO: Pod "pod-41051c84-e7f5-4712-a451-7297eb0499c2" satisfied condition "success or failure"
Jan 26 22:18:12.539: INFO: Trying to get logs from node jerma-node pod pod-41051c84-e7f5-4712-a451-7297eb0499c2 container test-container: 
STEP: delete the pod
Jan 26 22:18:12.599: INFO: Waiting for pod pod-41051c84-e7f5-4712-a451-7297eb0499c2 to disappear
Jan 26 22:18:12.635: INFO: Pod pod-41051c84-e7f5-4712-a451-7297eb0499c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:18:12.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7607" for this suite.

• [SLOW TEST:10.394 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3339,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:12.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4646
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4646
I0126 22:18:12.849067       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4646, replica count: 2
I0126 22:18:15.900238       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:18:18.900797       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:18:21.901313       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:18:24.901880       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 26 22:18:24.902: INFO: Creating new exec pod
Jan 26 22:18:33.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4646 execpod6fgdr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 26 22:18:34.381: INFO: stderr: "I0126 22:18:34.173056    4126 log.go:172] (0xc00099ac60) (0xc000a7a280) Create stream\nI0126 22:18:34.173329    4126 log.go:172] (0xc00099ac60) (0xc000a7a280) Stream added, broadcasting: 1\nI0126 22:18:34.176538    4126 log.go:172] (0xc00099ac60) Reply frame received for 1\nI0126 22:18:34.176585    4126 log.go:172] (0xc00099ac60) (0xc000974500) Create stream\nI0126 22:18:34.176597    4126 log.go:172] (0xc00099ac60) (0xc000974500) Stream added, broadcasting: 3\nI0126 22:18:34.178174    4126 log.go:172] (0xc00099ac60) Reply frame received for 3\nI0126 22:18:34.178327    4126 log.go:172] (0xc00099ac60) (0xc0009745a0) Create stream\nI0126 22:18:34.178350    4126 log.go:172] (0xc00099ac60) (0xc0009745a0) Stream added, broadcasting: 5\nI0126 22:18:34.182982    4126 log.go:172] (0xc00099ac60) Reply frame received for 5\nI0126 22:18:34.263617    4126 log.go:172] (0xc00099ac60) Data frame received for 5\nI0126 22:18:34.263693    4126 log.go:172] (0xc0009745a0) (5) Data frame handling\nI0126 22:18:34.263739    4126 log.go:172] (0xc0009745a0) (5) Data frame sent\nI0126 22:18:34.263760    4126 log.go:172] (0xc00099ac60) Data frame received for 5\nI0126 22:18:34.263772    4126 log.go:172] (0xc0009745a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0126 22:18:34.263816    4126 log.go:172] (0xc0009745a0) (5) Data frame sent\nI0126 22:18:34.274793    4126 log.go:172] (0xc00099ac60) Data frame received for 5\nI0126 22:18:34.274831    4126 log.go:172] (0xc0009745a0) (5) Data frame handling\nI0126 22:18:34.274853    4126 log.go:172] (0xc0009745a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0126 22:18:34.367439    4126 log.go:172] (0xc00099ac60) (0xc000974500) Stream removed, broadcasting: 3\nI0126 22:18:34.367754    4126 log.go:172] (0xc00099ac60) Data frame received for 1\nI0126 22:18:34.367918    4126 log.go:172] (0xc00099ac60) (0xc0009745a0) Stream removed, broadcasting: 5\nI0126 22:18:34.367993    4126 log.go:172] (0xc000a7a280) (1) Data frame handling\nI0126 22:18:34.368048    4126 log.go:172] (0xc000a7a280) (1) Data frame sent\nI0126 22:18:34.368059    4126 log.go:172] (0xc00099ac60) (0xc000a7a280) Stream removed, broadcasting: 1\nI0126 22:18:34.368277    4126 log.go:172] (0xc00099ac60) Go away received\nI0126 22:18:34.369188    4126 log.go:172] (0xc00099ac60) (0xc000a7a280) Stream removed, broadcasting: 1\nI0126 22:18:34.369211    4126 log.go:172] (0xc00099ac60) (0xc000974500) Stream removed, broadcasting: 3\nI0126 22:18:34.369228    4126 log.go:172] (0xc00099ac60) (0xc0009745a0) Stream removed, broadcasting: 5\n"
Jan 26 22:18:34.381: INFO: stdout: ""
Jan 26 22:18:34.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4646 execpod6fgdr -- /bin/sh -x -c nc -zv -t -w 2 10.96.36.140 80'
Jan 26 22:18:34.750: INFO: stderr: "I0126 22:18:34.600748    4146 log.go:172] (0xc00096b080) (0xc000a68140) Create stream\nI0126 22:18:34.601003    4146 log.go:172] (0xc00096b080) (0xc000a68140) Stream added, broadcasting: 1\nI0126 22:18:34.613602    4146 log.go:172] (0xc00096b080) Reply frame received for 1\nI0126 22:18:34.613690    4146 log.go:172] (0xc00096b080) (0xc0009ae0a0) Create stream\nI0126 22:18:34.613705    4146 log.go:172] (0xc00096b080) (0xc0009ae0a0) Stream added, broadcasting: 3\nI0126 22:18:34.614857    4146 log.go:172] (0xc00096b080) Reply frame received for 3\nI0126 22:18:34.614908    4146 log.go:172] (0xc00096b080) (0xc000a681e0) Create stream\nI0126 22:18:34.614920    4146 log.go:172] (0xc00096b080) (0xc000a681e0) Stream added, broadcasting: 5\nI0126 22:18:34.616244    4146 log.go:172] (0xc00096b080) Reply frame received for 5\nI0126 22:18:34.681314    4146 log.go:172] (0xc00096b080) Data frame received for 5\nI0126 22:18:34.681371    4146 log.go:172] (0xc000a681e0) (5) Data frame handling\nI0126 22:18:34.681382    4146 log.go:172] (0xc000a681e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.36.140 80\nI0126 22:18:34.682784    4146 log.go:172] (0xc00096b080) Data frame received for 5\nI0126 22:18:34.682801    4146 log.go:172] (0xc000a681e0) (5) Data frame handling\nI0126 22:18:34.682809    4146 log.go:172] (0xc000a681e0) (5) Data frame sent\nConnection to 10.96.36.140 80 port [tcp/http] succeeded!\nI0126 22:18:34.740407    4146 log.go:172] (0xc00096b080) Data frame received for 1\nI0126 22:18:34.740526    4146 log.go:172] (0xc000a68140) (1) Data frame handling\nI0126 22:18:34.740547    4146 log.go:172] (0xc000a68140) (1) Data frame sent\nI0126 22:18:34.740596    4146 log.go:172] (0xc00096b080) (0xc000a68140) Stream removed, broadcasting: 1\nI0126 22:18:34.742306    4146 log.go:172] (0xc00096b080) (0xc0009ae0a0) Stream removed, broadcasting: 3\nI0126 22:18:34.742413    4146 log.go:172] (0xc00096b080) (0xc000a681e0) Stream removed, broadcasting: 5\nI0126 22:18:34.742437    4146 log.go:172] (0xc00096b080) Go away received\nI0126 22:18:34.742506    4146 log.go:172] (0xc00096b080) (0xc000a68140) Stream removed, broadcasting: 1\nI0126 22:18:34.742532    4146 log.go:172] (0xc00096b080) (0xc0009ae0a0) Stream removed, broadcasting: 3\nI0126 22:18:34.742572    4146 log.go:172] (0xc00096b080) (0xc000a681e0) Stream removed, broadcasting: 5\n"
Jan 26 22:18:34.751: INFO: stdout: ""
Jan 26 22:18:34.751: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:18:34.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4646" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.128 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":203,"skipped":3343,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:34.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:18:35.467: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:18:37.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:18:39.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:18:41.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:18:44.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715673915, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:18:47.650: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:18:47.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:18:49.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6385" for this suite.
STEP: Destroying namespace "webhook-6385-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.328 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":204,"skipped":3381,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:49.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-5c13b98f-5d49-4479-b6a0-eeeec0182e75
STEP: Creating a pod to test consume secrets
Jan 26 22:18:49.263: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc" in namespace "projected-8778" to be "success or failure"
Jan 26 22:18:49.267: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378491ms
Jan 26 22:18:51.275: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01116038s
Jan 26 22:18:53.282: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018475032s
Jan 26 22:18:55.289: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025746304s
Jan 26 22:18:57.297: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034117957s
Jan 26 22:18:59.306: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042458659s
STEP: Saw pod success
Jan 26 22:18:59.306: INFO: Pod "pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc" satisfied condition "success or failure"
Jan 26 22:18:59.310: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 22:18:59.381: INFO: Waiting for pod pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc to disappear
Jan 26 22:18:59.386: INFO: Pod pod-projected-secrets-9596e046-a2ce-4048-bbe0-aa8a331e52fc no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:18:59.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8778" for this suite.

• [SLOW TEST:10.261 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:59.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:18:59.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4560" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":206,"skipped":3412,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:59.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1447
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1447
STEP: Creating statefulset with conflicting port in namespace statefulset-1447
STEP: Waiting until pod test-pod starts running in namespace statefulset-1447
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-1447
Jan 26 22:19:07.805: INFO: Observed stateful pod in namespace: statefulset-1447, name: ss-0, uid: 440bd979-88d7-4816-b0c2-a20b8620aa97, status phase: Pending. Waiting for statefulset controller to delete it.
Jan 26 22:24:07.811: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:742 +0x11ba
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002001300)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002001300)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc002001300, 0x4c30de8)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
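
Editor's note on this failure: the pre-created test-pod and the StatefulSet template both bind hostPort 21017, so once test-pod is running on jerma-node, ss-0 fails the PodFitsHostPorts predicate (see the describe output below) and sits in Pending. The test expects the StatefulSet controller to delete the unschedulable pod and create a replacement at least once within 5m; that recreation was never observed, hence the FAIL. The conflicting specs, in sketch form (only the port number and image are taken from this run; names and labels are illustrative):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Both objects bind the same hostPort, so only one pod can run per node.
	webserver := corev1.Container{
		Name:  "webserver",
		Image: "docker.io/library/httpd:2.4.38-alpine",
		Ports: []corev1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}},
	}
	labels := map[string]string{"app": "hostport-demo"}
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec:       corev1.PodSpec{Containers: []corev1.Container{webserver}},
	}
	one := int32(1)
	ss := &appsv1.StatefulSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "StatefulSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &one,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       corev1.PodSpec{Containers: []corev1.Container{webserver}},
			},
		},
	}
	for _, obj := range []interface{}{pod, ss} {
		out, _ := yaml.Marshal(obj)
		fmt.Printf("---\n%s", out)
	}
}
```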
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 22:24:07.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-1447'
Jan 26 22:24:08.068: INFO: stderr: ""
Jan 26 22:24:08.068: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-1447\nPriority:       0\nNode:           jerma-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-5c959bc8d4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nIPs:            \nControlled By:  StatefulSet/ss\nContainers:\n  webserver:\n    Image:        docker.io/library/httpd:2.4.38-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62cvz (ro)\nVolumes:\n  default-token-62cvz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-62cvz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m6s  kubelet, jerma-node  Predicate PodFitsHostPorts failed\n"
Jan 26 22:24:08.068: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-1447
Priority:       0
Node:           jerma-node/
Labels:         baz=blah
                controller-revision-hash=ss-5c959bc8d4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62cvz (ro)
Volumes:
  default-token-62cvz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-62cvz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m6s  kubelet, jerma-node  Predicate PodFitsHostPorts failed

Jan 26 22:24:08.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-1447 --tail=100'
Jan 26 22:24:08.297: INFO: rc: 1
Jan 26 22:24:08.298: INFO: 
Last 100 log lines of ss-0:

Jan 26 22:24:08.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-1447'
Jan 26 22:24:08.501: INFO: stderr: ""
Jan 26 22:24:08.501: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-1447\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Sun, 26 Jan 2020 22:18:59 +0000\nLabels:       \nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:  10.44.0.1\nContainers:\n  webserver:\n    Container ID:   docker://47560b914f9a8f0b4738631e85fe839c822ba39e8ccbc77bb419e7bc488f47a6\n    Image:          docker.io/library/httpd:2.4.38-alpine\n    Image ID:       docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Sun, 26 Jan 2020 22:19:06 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62cvz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-62cvz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-62cvz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                 Message\n  ----    ------   ----  ----                 -------\n  Normal  Pulled   5m5s  kubelet, jerma-node  Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\n  Normal  Created  5m2s  kubelet, jerma-node  Created container webserver\n  Normal  Started  5m2s  kubelet, jerma-node  Started container webserver\n"
Jan 26 22:24:08.501: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-1447
Priority:     0
Node:         jerma-node/10.96.2.250
Start Time:   Sun, 26 Jan 2020 22:18:59 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.44.0.1
IPs:
  IP:  10.44.0.1
Containers:
  webserver:
    Container ID:   docker://47560b914f9a8f0b4738631e85fe839c822ba39e8ccbc77bb419e7bc488f47a6
    Image:          docker.io/library/httpd:2.4.38-alpine
    Image ID:       docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Sun, 26 Jan 2020 22:19:06 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62cvz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-62cvz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-62cvz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m5s  kubelet, jerma-node  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
  Normal  Created  5m2s  kubelet, jerma-node  Created container webserver
  Normal  Started  5m2s  kubelet, jerma-node  Started container webserver

Jan 26 22:24:08.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-1447 --tail=100'
Jan 26 22:24:08.685: INFO: stderr: ""
Jan 26 22:24:08.686: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Sun Jan 26 22:19:06.488427 2020] [mpm_event:notice] [pid 1:tid 140537277434728] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Jan 26 22:19:06.488563 2020] [core:notice] [pid 1:tid 140537277434728] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Jan 26 22:24:08.686: INFO: 
Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
[Sun Jan 26 22:19:06.488427 2020] [mpm_event:notice] [pid 1:tid 140537277434728] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Sun Jan 26 22:19:06.488563 2020] [core:notice] [pid 1:tid 140537277434728] AH00094: Command line: 'httpd -D FOREGROUND'

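The AH00558 lines are a benign httpd startup warning: the container has no resolvable fully qualified hostname, so httpd falls back to the pod IP. For the stock httpd:2.4.38-alpine image it can be silenced by setting a ServerName; a sketch only, assuming the config path documented for the official httpd image:

    kubectl --kubeconfig=/root/.kube/config exec test-pod --namespace=statefulset-1447 -- \
      sh -c 'echo "ServerName localhost" >> /usr/local/apache2/conf/httpd.conf && httpd -k graceful'
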
Jan 26 22:24:08.686: INFO: Deleting all statefulset in ns statefulset-1447
Jan 26 22:24:08.691: INFO: Scaling statefulset ss to 0
Jan 26 22:24:18.721: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 22:24:18.727: INFO: Deleting statefulset ss
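Scaling the StatefulSet to 0 before deleting it lets the controller terminate pods in reverse ordinal order and settle status.replicas first; the equivalent cleanup by hand:

    kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 --namespace=statefulset-1447
    kubectl --kubeconfig=/root/.kube/config delete statefulset ss --namespace=statefulset-1447
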
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "statefulset-1447".
STEP: Found 12 events.
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-1447/ss is recreating failed Pod ss-0
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:00 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:00 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:02 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:02 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:03 +0000 UTC - event for test-pod: {kubelet jerma-node} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:06 +0000 UTC - event for test-pod: {kubelet jerma-node} Created: Created container webserver
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:06 +0000 UTC - event for test-pod: {kubelet jerma-node} Started: Started container webserver
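The same event stream can be pulled directly and ordered by timestamp, which makes the delete/create/reject loop around ss-0 easier to follow than the dump above:

    kubectl --kubeconfig=/root/.kube/config get events --namespace=statefulset-1447 --sort-by=.lastTimestamp
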
Jan 26 22:24:18.759: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Jan 26 22:24:18.759: INFO: test-pod  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:18:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:19:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:19:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:18:59 +0000 UTC  }]
Jan 26 22:24:18.759: INFO: 
Jan 26 22:24:18.765: INFO: 
Logging node info for node jerma-node
Jan 26 22:24:18.788: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 4552814 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 26 22:24:18.790: INFO: 
Logging kubelet events for node jerma-node
Jan 26 22:24:18.793: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Jan 26 22:24:18.802: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.802: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:24:18.802: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Jan 26 22:24:18.802: INFO: 	Container weave ready: true, restart count 1
Jan 26 22:24:18.802: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:24:18.802: INFO: test-pod started at 2020-01-26 22:18:59 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.802: INFO: 	Container webserver ready: true, restart count 0
W0126 22:24:18.807819       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 22:24:18.856: INFO: 
Latency metrics for node jerma-node
Jan 26 22:24:18.856: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.865: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 4552904 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 26 22:24:18.867: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.872: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.903: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 26 22:24:18.903: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: 	Container etcd ready: true, restart count 1
Jan 26 22:24:18.903: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:24:18.903: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: 	Container coredns ready: true, restart count 0
Jan 26 22:24:18.904: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.904: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 26 22:24:18.904: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.904: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 22:24:18.904: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Jan 26 22:24:18.904: INFO: 	Container weave ready: true, restart count 0
Jan 26 22:24:18.904: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 22:24:18.904: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.904: INFO: 	Container kube-scheduler ready: true, restart count 4
W0126 22:24:18.914731       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 22:24:18.977: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1447" for this suite.

• Failure [319.420 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Jan 26 22:24:07.811: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:742
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":206,"skipped":3419,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:24:18.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-76a601ad-71f2-4ab9-aabf-339ea55f5a72 in namespace container-probe-8802
Jan 26 22:24:29.205: INFO: Started pod busybox-76a601ad-71f2-4ab9-aabf-339ea55f5a72 in namespace container-probe-8802
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 22:24:29.209: INFO: Initial restart count of pod busybox-76a601ad-71f2-4ab9-aabf-339ea55f5a72 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:28:29.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8802" for this suite.

• [SLOW TEST:250.585 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3435,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:28:29.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:28:29.735: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e" in namespace "security-context-test-1010" to be "success or failure"
Jan 26 22:28:29.861: INFO: Pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e": Phase="Pending", Reason="", readiness=false. Elapsed: 125.479634ms
Jan 26 22:28:31.874: INFO: Pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13819113s
Jan 26 22:28:33.886: INFO: Pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150111312s
Jan 26 22:28:35.913: INFO: Pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177541705s
Jan 26 22:28:37.921: INFO: Pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184864609s
Jan 26 22:28:37.921: INFO: Pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e" satisfied condition "success or failure"
Jan 26 22:28:37.959: INFO: Got logs for pod "busybox-privileged-false-6b170840-bf9f-42ef-9ff6-cd1a9918db9e": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:28:37.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1010" for this suite.

• [SLOW TEST:8.395 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3438,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:28:37.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-9
STEP: creating replication controller nodeport-test in namespace services-9
I0126 22:28:38.253536       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9, replica count: 2
I0126 22:28:41.305342       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:28:44.306214       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:28:47.307238       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:28:50.307957       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 26 22:28:50.308: INFO: Creating new exec pod
Jan 26 22:28:59.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9 execpod7vk9h -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 26 22:29:01.731: INFO: stderr: "I0126 22:29:01.540416    4243 log.go:172] (0xc00073ca50) (0xc0006bfea0) Create stream\nI0126 22:29:01.540503    4243 log.go:172] (0xc00073ca50) (0xc0006bfea0) Stream added, broadcasting: 1\nI0126 22:29:01.545826    4243 log.go:172] (0xc00073ca50) Reply frame received for 1\nI0126 22:29:01.545880    4243 log.go:172] (0xc00073ca50) (0xc0005ca6e0) Create stream\nI0126 22:29:01.545901    4243 log.go:172] (0xc00073ca50) (0xc0005ca6e0) Stream added, broadcasting: 3\nI0126 22:29:01.547509    4243 log.go:172] (0xc00073ca50) Reply frame received for 3\nI0126 22:29:01.547546    4243 log.go:172] (0xc00073ca50) (0xc0004034a0) Create stream\nI0126 22:29:01.547559    4243 log.go:172] (0xc00073ca50) (0xc0004034a0) Stream added, broadcasting: 5\nI0126 22:29:01.549138    4243 log.go:172] (0xc00073ca50) Reply frame received for 5\nI0126 22:29:01.633214    4243 log.go:172] (0xc00073ca50) Data frame received for 5\nI0126 22:29:01.633494    4243 log.go:172] (0xc0004034a0) (5) Data frame handling\nI0126 22:29:01.633586    4243 log.go:172] (0xc0004034a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0126 22:29:01.642098    4243 log.go:172] (0xc00073ca50) Data frame received for 5\nI0126 22:29:01.642187    4243 log.go:172] (0xc0004034a0) (5) Data frame handling\nI0126 22:29:01.642205    4243 log.go:172] (0xc0004034a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0126 22:29:01.719006    4243 log.go:172] (0xc00073ca50) (0xc0005ca6e0) Stream removed, broadcasting: 3\nI0126 22:29:01.719149    4243 log.go:172] (0xc00073ca50) Data frame received for 1\nI0126 22:29:01.719166    4243 log.go:172] (0xc0006bfea0) (1) Data frame handling\nI0126 22:29:01.719181    4243 log.go:172] (0xc0006bfea0) (1) Data frame sent\nI0126 22:29:01.719239    4243 log.go:172] (0xc00073ca50) (0xc0006bfea0) Stream removed, broadcasting: 1\nI0126 22:29:01.719467    4243 log.go:172] (0xc00073ca50) (0xc0004034a0) Stream removed, broadcasting: 5\nI0126 22:29:01.719569    4243 log.go:172] (0xc00073ca50) Go away received\nI0126 22:29:01.720559    4243 log.go:172] (0xc00073ca50) (0xc0006bfea0) Stream removed, broadcasting: 1\nI0126 22:29:01.720623    4243 log.go:172] (0xc00073ca50) (0xc0005ca6e0) Stream removed, broadcasting: 3\nI0126 22:29:01.720653    4243 log.go:172] (0xc00073ca50) (0xc0004034a0) Stream removed, broadcasting: 5\n"
Jan 26 22:29:01.731: INFO: stdout: ""
Jan 26 22:29:01.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9 execpod7vk9h -- /bin/sh -x -c nc -zv -t -w 2 10.96.44.31 80'
Jan 26 22:29:02.196: INFO: stderr: "I0126 22:29:01.947726    4274 log.go:172] (0xc0009740b0) (0xc0005695e0) Create stream\nI0126 22:29:01.948079    4274 log.go:172] (0xc0009740b0) (0xc0005695e0) Stream added, broadcasting: 1\nI0126 22:29:01.954289    4274 log.go:172] (0xc0009740b0) Reply frame received for 1\nI0126 22:29:01.954350    4274 log.go:172] (0xc0009740b0) (0xc000a78000) Create stream\nI0126 22:29:01.954370    4274 log.go:172] (0xc0009740b0) (0xc000a78000) Stream added, broadcasting: 3\nI0126 22:29:01.956120    4274 log.go:172] (0xc0009740b0) Reply frame received for 3\nI0126 22:29:01.956276    4274 log.go:172] (0xc0009740b0) (0xc000a780a0) Create stream\nI0126 22:29:01.956298    4274 log.go:172] (0xc0009740b0) (0xc000a780a0) Stream added, broadcasting: 5\nI0126 22:29:01.961417    4274 log.go:172] (0xc0009740b0) Reply frame received for 5\nI0126 22:29:02.065137    4274 log.go:172] (0xc0009740b0) Data frame received for 5\nI0126 22:29:02.065537    4274 log.go:172] (0xc000a780a0) (5) Data frame handling\nI0126 22:29:02.065638    4274 log.go:172] (0xc000a780a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.44.31 80\nConnection to 10.96.44.31 80 port [tcp/http] succeeded!\nI0126 22:29:02.177100    4274 log.go:172] (0xc0009740b0) (0xc000a78000) Stream removed, broadcasting: 3\nI0126 22:29:02.177326    4274 log.go:172] (0xc0009740b0) Data frame received for 1\nI0126 22:29:02.177350    4274 log.go:172] (0xc0005695e0) (1) Data frame handling\nI0126 22:29:02.177385    4274 log.go:172] (0xc0005695e0) (1) Data frame sent\nI0126 22:29:02.177479    4274 log.go:172] (0xc0009740b0) (0xc0005695e0) Stream removed, broadcasting: 1\nI0126 22:29:02.177682    4274 log.go:172] (0xc0009740b0) (0xc000a780a0) Stream removed, broadcasting: 5\nI0126 22:29:02.177796    4274 log.go:172] (0xc0009740b0) Go away received\nI0126 22:29:02.178741    4274 log.go:172] (0xc0009740b0) (0xc0005695e0) Stream removed, broadcasting: 1\nI0126 22:29:02.178828    4274 log.go:172] (0xc0009740b0) (0xc000a78000) Stream removed, broadcasting: 3\nI0126 22:29:02.178868    4274 log.go:172] (0xc0009740b0) (0xc000a780a0) Stream removed, broadcasting: 5\n"
Jan 26 22:29:02.196: INFO: stdout: ""
Jan 26 22:29:02.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9 execpod7vk9h -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30505'
Jan 26 22:29:02.591: INFO: stderr: "I0126 22:29:02.396559    4296 log.go:172] (0xc000990000) (0xc00063a5a0) Create stream\nI0126 22:29:02.396750    4296 log.go:172] (0xc000990000) (0xc00063a5a0) Stream added, broadcasting: 1\nI0126 22:29:02.399360    4296 log.go:172] (0xc000990000) Reply frame received for 1\nI0126 22:29:02.399475    4296 log.go:172] (0xc000990000) (0xc00093e000) Create stream\nI0126 22:29:02.399492    4296 log.go:172] (0xc000990000) (0xc00093e000) Stream added, broadcasting: 3\nI0126 22:29:02.400952    4296 log.go:172] (0xc000990000) Reply frame received for 3\nI0126 22:29:02.400978    4296 log.go:172] (0xc000990000) (0xc00093e0a0) Create stream\nI0126 22:29:02.400984    4296 log.go:172] (0xc000990000) (0xc00093e0a0) Stream added, broadcasting: 5\nI0126 22:29:02.402840    4296 log.go:172] (0xc000990000) Reply frame received for 5\nI0126 22:29:02.463186    4296 log.go:172] (0xc000990000) Data frame received for 5\nI0126 22:29:02.463259    4296 log.go:172] (0xc00093e0a0) (5) Data frame handling\nI0126 22:29:02.463278    4296 log.go:172] (0xc00093e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30505\nI0126 22:29:02.467766    4296 log.go:172] (0xc000990000) Data frame received for 5\nI0126 22:29:02.467887    4296 log.go:172] (0xc00093e0a0) (5) Data frame handling\nI0126 22:29:02.467935    4296 log.go:172] (0xc00093e0a0) (5) Data frame sent\nConnection to 10.96.2.250 30505 port [tcp/30505] succeeded!\nI0126 22:29:02.578220    4296 log.go:172] (0xc000990000) Data frame received for 1\nI0126 22:29:02.578346    4296 log.go:172] (0xc000990000) (0xc00093e000) Stream removed, broadcasting: 3\nI0126 22:29:02.578468    4296 log.go:172] (0xc00063a5a0) (1) Data frame handling\nI0126 22:29:02.578479    4296 log.go:172] (0xc00063a5a0) (1) Data frame sent\nI0126 22:29:02.578486    4296 log.go:172] (0xc000990000) (0xc00063a5a0) Stream removed, broadcasting: 1\nI0126 22:29:02.579158    4296 log.go:172] (0xc000990000) (0xc00093e0a0) Stream removed, broadcasting: 5\nI0126 22:29:02.579504    4296 log.go:172] (0xc000990000) Go away received\nI0126 22:29:02.580071    4296 log.go:172] (0xc000990000) (0xc00063a5a0) Stream removed, broadcasting: 1\nI0126 22:29:02.580275    4296 log.go:172] (0xc000990000) (0xc00093e000) Stream removed, broadcasting: 3\nI0126 22:29:02.580353    4296 log.go:172] (0xc000990000) (0xc00093e0a0) Stream removed, broadcasting: 5\n"
Jan 26 22:29:02.591: INFO: stdout: ""
Jan 26 22:29:02.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9 execpod7vk9h -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30505'
Jan 26 22:29:02.870: INFO: stderr: "I0126 22:29:02.702946    4318 log.go:172] (0xc00092f080) (0xc0008f66e0) Create stream\nI0126 22:29:02.703063    4318 log.go:172] (0xc00092f080) (0xc0008f66e0) Stream added, broadcasting: 1\nI0126 22:29:02.707504    4318 log.go:172] (0xc00092f080) Reply frame received for 1\nI0126 22:29:02.707547    4318 log.go:172] (0xc00092f080) (0xc0006485a0) Create stream\nI0126 22:29:02.707554    4318 log.go:172] (0xc00092f080) (0xc0006485a0) Stream added, broadcasting: 3\nI0126 22:29:02.708493    4318 log.go:172] (0xc00092f080) Reply frame received for 3\nI0126 22:29:02.708512    4318 log.go:172] (0xc00092f080) (0xc00046f360) Create stream\nI0126 22:29:02.708520    4318 log.go:172] (0xc00092f080) (0xc00046f360) Stream added, broadcasting: 5\nI0126 22:29:02.709616    4318 log.go:172] (0xc00092f080) Reply frame received for 5\nI0126 22:29:02.768530    4318 log.go:172] (0xc00092f080) Data frame received for 5\nI0126 22:29:02.768558    4318 log.go:172] (0xc00046f360) (5) Data frame handling\nI0126 22:29:02.768579    4318 log.go:172] (0xc00046f360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30505\nI0126 22:29:02.772195    4318 log.go:172] (0xc00092f080) Data frame received for 5\nI0126 22:29:02.772220    4318 log.go:172] (0xc00046f360) (5) Data frame handling\nI0126 22:29:02.772238    4318 log.go:172] (0xc00046f360) (5) Data frame sent\nConnection to 10.96.1.234 30505 port [tcp/30505] succeeded!\nI0126 22:29:02.861698    4318 log.go:172] (0xc00092f080) Data frame received for 1\nI0126 22:29:02.861846    4318 log.go:172] (0xc00092f080) (0xc0006485a0) Stream removed, broadcasting: 3\nI0126 22:29:02.861976    4318 log.go:172] (0xc0008f66e0) (1) Data frame handling\nI0126 22:29:02.861993    4318 log.go:172] (0xc0008f66e0) (1) Data frame sent\nI0126 22:29:02.862046    4318 log.go:172] (0xc00092f080) (0xc0008f66e0) Stream removed, broadcasting: 1\nI0126 22:29:02.862117    4318 log.go:172] (0xc00092f080) (0xc00046f360) Stream removed, broadcasting: 5\nI0126 22:29:02.862206    4318 log.go:172] (0xc00092f080) Go away received\nI0126 22:29:02.863078    4318 log.go:172] (0xc00092f080) (0xc0008f66e0) Stream removed, broadcasting: 1\nI0126 22:29:02.863119    4318 log.go:172] (0xc00092f080) (0xc0006485a0) Stream removed, broadcasting: 3\nI0126 22:29:02.863151    4318 log.go:172] (0xc00092f080) (0xc00046f360) Stream removed, broadcasting: 5\n"
Jan 26 22:29:02.870: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:29:02.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.909 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":209,"skipped":3447,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:29:02.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-59477462-f75f-4638-9671-15d5245f0ece
STEP: Creating secret with name secret-projected-all-test-volume-f06468f4-2ea6-4c77-abb2-68e9b064daef
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 26 22:29:03.091: INFO: Waiting up to 5m0s for pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c" in namespace "projected-8431" to be "success or failure"
Jan 26 22:29:03.096: INFO: Pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.750327ms
Jan 26 22:29:05.103: INFO: Pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011285001s
Jan 26 22:29:07.109: INFO: Pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018106391s
Jan 26 22:29:09.115: INFO: Pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023185801s
Jan 26 22:29:12.471: INFO: Pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.379341799s
STEP: Saw pod success
Jan 26 22:29:12.471: INFO: Pod "projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c" satisfied condition "success or failure"
Jan 26 22:29:12.492: INFO: Trying to get logs from node jerma-node pod projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c container projected-all-volume-test: 
STEP: delete the pod
Jan 26 22:29:13.090: INFO: Waiting for pod projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c to disappear
Jan 26 22:29:13.229: INFO: Pod projected-volume-94544920-1f56-4e57-a3dd-f7449ddc9f9c no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:29:13.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8431" for this suite.

• [SLOW TEST:10.365 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3472,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:29:13.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-85e20f1e-6987-47b3-bdb5-a6942e1215ff
STEP: Creating configMap with name cm-test-opt-upd-bd31962e-baf5-4c06-ae15-0c8e5ddbcc74
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-85e20f1e-6987-47b3-bdb5-a6942e1215ff
STEP: Updating configmap cm-test-opt-upd-bd31962e-baf5-4c06-ae15-0c8e5ddbcc74
STEP: Creating configMap with name cm-test-opt-create-73a3e5da-db58-4828-b377-d4e4d9f67357
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:29:28.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9606" for this suite.

• [SLOW TEST:14.813 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3515,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:29:28.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-7feab195-4c36-43a1-ac0d-5ff4f89fe28f
STEP: Creating a pod to test consume configMaps
Jan 26 22:29:28.245: INFO: Waiting up to 5m0s for pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef" in namespace "configmap-1143" to be "success or failure"
Jan 26 22:29:28.252: INFO: Pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.47735ms
Jan 26 22:29:30.262: INFO: Pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016449755s
Jan 26 22:29:32.310: INFO: Pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064731907s
Jan 26 22:29:34.320: INFO: Pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074239603s
Jan 26 22:29:36.333: INFO: Pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087633881s
STEP: Saw pod success
Jan 26 22:29:36.333: INFO: Pod "pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef" satisfied condition "success or failure"
Jan 26 22:29:36.350: INFO: Trying to get logs from node jerma-node pod pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef container configmap-volume-test: 
STEP: delete the pod
Jan 26 22:29:36.404: INFO: Waiting for pod pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef to disappear
Jan 26 22:29:36.414: INFO: Pod pod-configmaps-6057b168-b293-44eb-924e-f7a5111f13ef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:29:36.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1143" for this suite.

• [SLOW TEST:8.422 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3522,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:29:36.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 26 22:29:52.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 22:29:52.749: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 22:29:54.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 22:29:54.758: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 22:29:56.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 22:29:56.760: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 22:29:58.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 22:29:58.758: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 22:30:00.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 22:30:00.756: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 22:30:02.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 22:30:02.757: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:30:02.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3652" for this suite.

• [SLOW TEST:26.315 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3526,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:30:02.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:30:10.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1262" for this suite.

• [SLOW TEST:8.172 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3545,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:30:10.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-4b870e02-736e-4161-a2d1-594fa7a86b34
STEP: Creating a pod to test consume secrets
Jan 26 22:30:11.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e" in namespace "projected-320" to be "success or failure"
Jan 26 22:30:11.201: INFO: Pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 79.347601ms
Jan 26 22:30:13.208: INFO: Pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086660994s
Jan 26 22:30:15.213: INFO: Pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091586589s
Jan 26 22:30:17.221: INFO: Pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099604595s
Jan 26 22:30:19.230: INFO: Pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10848532s
STEP: Saw pod success
Jan 26 22:30:19.230: INFO: Pod "pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e" satisfied condition "success or failure"
Jan 26 22:30:19.235: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e container secret-volume-test: 
STEP: delete the pod
Jan 26 22:30:19.319: INFO: Waiting for pod pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e to disappear
Jan 26 22:30:19.326: INFO: Pod pod-projected-secrets-4b627f96-bfae-4b39-b4ec-d8c426305a8e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:30:19.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-320" for this suite.

• [SLOW TEST:8.361 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3545,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:30:19.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:30:20.310: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:30:22.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:30:24.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:30:26.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674620, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:30:29.446: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:30:41.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3642" for this suite.
STEP: Destroying namespace "webhook-3642-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.626 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":216,"skipped":3621,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:30:41.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-4b001308-db2f-415c-abc1-e9309c87c29f
STEP: Creating a pod to test consume secrets
Jan 26 22:30:42.047: INFO: Waiting up to 5m0s for pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5" in namespace "secrets-2747" to be "success or failure"
Jan 26 22:30:42.053: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.296261ms
Jan 26 22:30:44.060: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012938462s
Jan 26 22:30:46.065: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017434651s
Jan 26 22:30:48.072: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024212999s
Jan 26 22:30:50.077: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029696087s
Jan 26 22:30:52.087: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.039568341s
STEP: Saw pod success
Jan 26 22:30:52.087: INFO: Pod "pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5" satisfied condition "success or failure"
Jan 26 22:30:52.090: INFO: Trying to get logs from node jerma-node pod pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5 container secret-volume-test: 
STEP: delete the pod
Jan 26 22:30:52.128: INFO: Waiting for pod pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5 to disappear
Jan 26 22:30:52.134: INFO: Pod pod-secrets-0c2397ac-78e0-40c5-926c-ee632f35f2a5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:30:52.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2747" for this suite.

• [SLOW TEST:10.183 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3645,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:30:52.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-712
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 22:30:52.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 22:31:22.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-712 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 22:31:22.436: INFO: >>> kubeConfig: /root/.kube/config
I0126 22:31:22.490076       8 log.go:172] (0xc00309c580) (0xc00223dea0) Create stream
I0126 22:31:22.490243       8 log.go:172] (0xc00309c580) (0xc00223dea0) Stream added, broadcasting: 1
I0126 22:31:22.497115       8 log.go:172] (0xc00309c580) Reply frame received for 1
I0126 22:31:22.497316       8 log.go:172] (0xc00309c580) (0xc001bb80a0) Create stream
I0126 22:31:22.497359       8 log.go:172] (0xc00309c580) (0xc001bb80a0) Stream added, broadcasting: 3
I0126 22:31:22.506725       8 log.go:172] (0xc00309c580) Reply frame received for 3
I0126 22:31:22.506849       8 log.go:172] (0xc00309c580) (0xc0027f5360) Create stream
I0126 22:31:22.506896       8 log.go:172] (0xc00309c580) (0xc0027f5360) Stream added, broadcasting: 5
I0126 22:31:22.513120       8 log.go:172] (0xc00309c580) Reply frame received for 5
I0126 22:31:22.635722       8 log.go:172] (0xc00309c580) Data frame received for 3
I0126 22:31:22.635864       8 log.go:172] (0xc001bb80a0) (3) Data frame handling
I0126 22:31:22.635887       8 log.go:172] (0xc001bb80a0) (3) Data frame sent
I0126 22:31:22.703150       8 log.go:172] (0xc00309c580) Data frame received for 1
I0126 22:31:22.703280       8 log.go:172] (0xc00223dea0) (1) Data frame handling
I0126 22:31:22.703318       8 log.go:172] (0xc00223dea0) (1) Data frame sent
I0126 22:31:22.703346       8 log.go:172] (0xc00309c580) (0xc00223dea0) Stream removed, broadcasting: 1
I0126 22:31:22.703659       8 log.go:172] (0xc00309c580) (0xc001bb80a0) Stream removed, broadcasting: 3
I0126 22:31:22.703830       8 log.go:172] (0xc00309c580) (0xc0027f5360) Stream removed, broadcasting: 5
I0126 22:31:22.703904       8 log.go:172] (0xc00309c580) (0xc00223dea0) Stream removed, broadcasting: 1
I0126 22:31:22.703914       8 log.go:172] (0xc00309c580) (0xc001bb80a0) Stream removed, broadcasting: 3
I0126 22:31:22.703926       8 log.go:172] (0xc00309c580) (0xc0027f5360) Stream removed, broadcasting: 5
Jan 26 22:31:22.704: INFO: Found all expected endpoints: [netserver-0]
Jan 26 22:31:22.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-712 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 22:31:22.728: INFO: >>> kubeConfig: /root/.kube/config
I0126 22:31:22.768467       8 log.go:172] (0xc00309cb00) (0xc0021fc320) Create stream
I0126 22:31:22.768527       8 log.go:172] (0xc00309cb00) (0xc0021fc320) Stream added, broadcasting: 1
I0126 22:31:22.770887       8 log.go:172] (0xc00309cb00) Reply frame received for 1
I0126 22:31:22.770916       8 log.go:172] (0xc00309cb00) (0xc0021fc5a0) Create stream
I0126 22:31:22.770924       8 log.go:172] (0xc00309cb00) (0xc0021fc5a0) Stream added, broadcasting: 3
I0126 22:31:22.771765       8 log.go:172] (0xc00309cb00) Reply frame received for 3
I0126 22:31:22.771785       8 log.go:172] (0xc00309cb00) (0xc0024270e0) Create stream
I0126 22:31:22.771795       8 log.go:172] (0xc00309cb00) (0xc0024270e0) Stream added, broadcasting: 5
I0126 22:31:22.772667       8 log.go:172] (0xc00309cb00) Reply frame received for 5
I0126 22:31:22.861803       8 log.go:172] (0xc00309cb00) Data frame received for 3
I0126 22:31:22.861878       8 log.go:172] (0xc0021fc5a0) (3) Data frame handling
I0126 22:31:22.861924       8 log.go:172] (0xc0021fc5a0) (3) Data frame sent
I0126 22:31:22.967253       8 log.go:172] (0xc00309cb00) (0xc0021fc5a0) Stream removed, broadcasting: 3
I0126 22:31:22.968104       8 log.go:172] (0xc00309cb00) Data frame received for 1
I0126 22:31:22.968402       8 log.go:172] (0xc00309cb00) (0xc0024270e0) Stream removed, broadcasting: 5
I0126 22:31:22.968648       8 log.go:172] (0xc0021fc320) (1) Data frame handling
I0126 22:31:22.968706       8 log.go:172] (0xc0021fc320) (1) Data frame sent
I0126 22:31:22.968867       8 log.go:172] (0xc00309cb00) (0xc0021fc320) Stream removed, broadcasting: 1
I0126 22:31:22.969838       8 log.go:172] (0xc00309cb00) Go away received
I0126 22:31:22.969930       8 log.go:172] (0xc00309cb00) (0xc0021fc320) Stream removed, broadcasting: 1
I0126 22:31:22.970042       8 log.go:172] (0xc00309cb00) (0xc0021fc5a0) Stream removed, broadcasting: 3
I0126 22:31:22.970085       8 log.go:172] (0xc00309cb00) (0xc0024270e0) Stream removed, broadcasting: 5
Jan 26 22:31:22.970: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:31:22.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-712" for this suite.

• [SLOW TEST:30.860 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3662,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:31:23.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:31:35.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1194" for this suite.

• [SLOW TEST:12.947 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":219,"skipped":3711,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:31:35.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 26 22:31:42.723: INFO: Successfully updated pod "annotationupdate73316c17-ac7e-4fd2-87b9-b652fb6e8408"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:31:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4634" for this suite.

• [SLOW TEST:8.864 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3724,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:31:44.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-c555d0dd-0d56-4faf-bbef-f8a7613fa570
STEP: Creating secret with name s-test-opt-upd-626ceddd-a090-4dde-812b-cd408b2fdd8e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c555d0dd-0d56-4faf-bbef-f8a7613fa570
STEP: Updating secret s-test-opt-upd-626ceddd-a090-4dde-812b-cd408b2fdd8e
STEP: Creating secret with name s-test-opt-create-c5bf858c-ac52-45d2-a75e-5d514c2fedf1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:31:59.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4372" for this suite.

• [SLOW TEST:14.339 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3735,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:31:59.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:31:59.262: INFO: Creating ReplicaSet my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370
Jan 26 22:31:59.276: INFO: Pod name my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370: Found 0 pods out of 1
Jan 26 22:32:05.190: INFO: Pod name my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370: Found 1 pods out of 1
Jan 26 22:32:05.190: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370" is running
Jan 26 22:32:11.240: INFO: Pod "my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370-tzml4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 22:31:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 22:31:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 22:31:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 22:31:59 +0000 UTC Reason: Message:}])
Jan 26 22:32:11.240: INFO: Trying to dial the pod
Jan 26 22:32:16.304: INFO: Controller my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370: Got expected result from replica 1 [my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370-tzml4]: "my-hostname-basic-34e71eb6-ec1d-4349-a362-344cb1cbd370-tzml4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:32:16.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2774" for this suite.

• [SLOW TEST:17.153 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":222,"skipped":3766,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:32:16.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-519a40c1-25ea-4f21-9117-be86cdef22eb
STEP: Creating a pod to test consume secrets
Jan 26 22:32:16.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a" in namespace "projected-6448" to be "success or failure"
Jan 26 22:32:16.476: INFO: Pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a": Phase="Pending", Reason="", readiness=false. Elapsed: 76.103572ms
Jan 26 22:32:18.486: INFO: Pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085637021s
Jan 26 22:32:20.497: INFO: Pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096336468s
Jan 26 22:32:22.509: INFO: Pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108205794s
Jan 26 22:32:24.526: INFO: Pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125588815s
STEP: Saw pod success
Jan 26 22:32:24.527: INFO: Pod "pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a" satisfied condition "success or failure"
Jan 26 22:32:24.534: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 22:32:24.599: INFO: Waiting for pod pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a to disappear
Jan 26 22:32:24.603: INFO: Pod pod-projected-secrets-e5cb2f85-375f-4e6f-9d7b-62bc95d2622a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:32:24.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6448" for this suite.

• [SLOW TEST:8.420 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3772,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:32:24.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-5da643c0-9fe7-4650-ab81-2560d4b40adf
STEP: Creating a pod to test consume configMaps
Jan 26 22:32:24.831: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b" in namespace "projected-5961" to be "success or failure"
Jan 26 22:32:24.913: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 81.94636ms
Jan 26 22:32:26.919: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088171897s
Jan 26 22:32:28.923: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092374179s
Jan 26 22:32:30.931: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100705147s
Jan 26 22:32:32.938: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107470158s
Jan 26 22:32:34.946: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11567179s
STEP: Saw pod success
Jan 26 22:32:34.946: INFO: Pod "pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b" satisfied condition "success or failure"
Jan 26 22:32:34.952: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 22:32:35.004: INFO: Waiting for pod pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b to disappear
Jan 26 22:32:35.011: INFO: Pod pod-projected-configmaps-11a5bcc1-85a2-47f0-9852-f77a29415a0b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:32:35.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5961" for this suite.

• [SLOW TEST:10.280 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3790,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:32:35.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:33:35.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3231" for this suite.

• [SLOW TEST:60.204 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3799,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:33:35.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:33:46.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1327" for this suite.

• [SLOW TEST:11.396 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":226,"skipped":3819,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:33:46.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6132
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-6132
Jan 26 22:33:46.810: INFO: Found 0 stateful pods, waiting for 1
Jan 26 22:33:56.815: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 22:33:56.849: INFO: Deleting all statefulset in ns statefulset-6132
Jan 26 22:33:56.903: INFO: Scaling statefulset ss to 0
Jan 26 22:34:17.034: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 22:34:17.048: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:34:17.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6132" for this suite.

• [SLOW TEST:30.496 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":227,"skipped":3831,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:34:17.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-307
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-307 to expose endpoints map[]
Jan 26 22:34:17.251: INFO: Get endpoints failed (49.481789ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 26 22:34:18.260: INFO: successfully validated that service endpoint-test2 in namespace services-307 exposes endpoints map[] (1.058091966s elapsed)
STEP: Creating pod pod1 in namespace services-307
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-307 to expose endpoints map[pod1:[80]]
Jan 26 22:34:22.564: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.292539969s elapsed, will retry)
Jan 26 22:34:24.621: INFO: successfully validated that service endpoint-test2 in namespace services-307 exposes endpoints map[pod1:[80]] (6.349120062s elapsed)
STEP: Creating pod pod2 in namespace services-307
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-307 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 26 22:34:29.260: INFO: Unexpected endpoints: found map[70db6739-513a-4011-a78d-36b8f31635e9:[80]], expected map[pod1:[80] pod2:[80]] (4.633013184s elapsed, will retry)
Jan 26 22:34:31.392: INFO: successfully validated that service endpoint-test2 in namespace services-307 exposes endpoints map[pod1:[80] pod2:[80]] (6.764867248s elapsed)
STEP: Deleting pod pod1 in namespace services-307
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-307 to expose endpoints map[pod2:[80]]
Jan 26 22:34:32.495: INFO: successfully validated that service endpoint-test2 in namespace services-307 exposes endpoints map[pod2:[80]] (1.092889148s elapsed)
STEP: Deleting pod pod2 in namespace services-307
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-307 to expose endpoints map[]
Jan 26 22:34:34.940: INFO: successfully validated that service endpoint-test2 in namespace services-307 exposes endpoints map[] (2.435121698s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:34:35.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-307" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:18.267 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":228,"skipped":3831,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:34:35.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 26 22:34:35.539: INFO: Waiting up to 5m0s for pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c" in namespace "emptydir-8756" to be "success or failure"
Jan 26 22:34:35.567: INFO: Pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.768139ms
Jan 26 22:34:38.189: INFO: Pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649698396s
Jan 26 22:34:40.195: INFO: Pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.655874148s
Jan 26 22:34:42.201: INFO: Pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.66164797s
Jan 26 22:34:44.206: INFO: Pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.667142496s
STEP: Saw pod success
Jan 26 22:34:44.206: INFO: Pod "pod-bc5517e7-856c-4653-84f8-b9e2a932087c" satisfied condition "success or failure"
Jan 26 22:34:44.210: INFO: Trying to get logs from node jerma-node pod pod-bc5517e7-856c-4653-84f8-b9e2a932087c container test-container: 
STEP: delete the pod
Jan 26 22:34:44.291: INFO: Waiting for pod pod-bc5517e7-856c-4653-84f8-b9e2a932087c to disappear
Jan 26 22:34:44.299: INFO: Pod pod-bc5517e7-856c-4653-84f8-b9e2a932087c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:34:44.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8756" for this suite.

• [SLOW TEST:8.932 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3840,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:34:44.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:34:44.388: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 26 22:34:49.399: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 22:34:51.419: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 26 22:34:51.452: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3124 /apis/apps/v1/namespaces/deployment-3124/deployments/test-cleanup-deployment 179cdbe7-424c-4bde-8feb-1ccdfaa9c7af 4555419 1 2020-01-26 22:34:51 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000af4288  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jan 26 22:34:51.458: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan 26 22:34:51.458: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 26 22:34:51.458: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-3124 /apis/apps/v1/namespaces/deployment-3124/replicasets/test-cleanup-controller 40eca8d1-ef2b-4253-b6c1-f6fcf3399a40 4555420 1 2020-01-26 22:34:44 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 179cdbe7-424c-4bde-8feb-1ccdfaa9c7af 0xc003532bd7 0xc003532bd8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003532c38  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 26 22:34:51.501: INFO: Pod "test-cleanup-controller-hkgv9" is available:
&Pod{ObjectMeta:{test-cleanup-controller-hkgv9 test-cleanup-controller- deployment-3124 /api/v1/namespaces/deployment-3124/pods/test-cleanup-controller-hkgv9 2ca51a2c-d127-43d6-b9a1-40f895682723 4555415 0 2020-01-26 22:34:44 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 40eca8d1-ef2b-4253-b6c1-f6fcf3399a40 0xc003533047 0xc003533048}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8zmvd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8zmvd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8zmvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:34:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:34:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:34:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-26 22:34:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-26 22:34:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-26 22:34:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://03c1948dfc424cb9826465977bf77008d3d63dcbae4efdc18a95efbd3d7bc509,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:34:51.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3124" for this suite.

• [SLOW TEST:7.309 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":230,"skipped":3846,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:34:51.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jan 26 22:34:51.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 26 22:34:52.035: INFO: stderr: ""
Jan 26 22:34:52.035: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:34:52.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7540" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":231,"skipped":3862,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:34:52.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:34:52.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:34:54.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:34:56.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:34:58.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:35:00.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:35:02.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715674892, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:35:05.993: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 26 22:35:06.033: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:35:06.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8865" for this suite.
STEP: Destroying namespace "webhook-8865-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.210 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":232,"skipped":3924,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:35:06.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 26 22:35:06.319: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:35:22.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4491" for this suite.

• [SLOW TEST:15.970 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":233,"skipped":3926,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:35:22.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:35:22.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818" in namespace "projected-8999" to be "success or failure"
Jan 26 22:35:22.368: INFO: Pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818": Phase="Pending", Reason="", readiness=false. Elapsed: 21.780886ms
Jan 26 22:35:24.377: INFO: Pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030453532s
Jan 26 22:35:26.394: INFO: Pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047215167s
Jan 26 22:35:28.402: INFO: Pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055035659s
Jan 26 22:35:30.450: INFO: Pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103162791s
STEP: Saw pod success
Jan 26 22:35:30.450: INFO: Pod "downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818" satisfied condition "success or failure"
Jan 26 22:35:30.456: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818 container client-container: 
STEP: delete the pod
Jan 26 22:35:30.507: INFO: Waiting for pod downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818 to disappear
Jan 26 22:35:30.514: INFO: Pod downwardapi-volume-42845c7c-55bf-4012-8353-a9bfc5bd4818 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:35:30.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8999" for this suite.

• [SLOW TEST:8.292 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3930,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:35:30.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8072
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 26 22:35:30.802: INFO: Found 0 stateful pods, waiting for 3
Jan 26 22:35:41.031: INFO: Found 2 stateful pods, waiting for 3
Jan 26 22:35:50.819: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:35:50.819: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:35:50.819: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 22:36:00.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:36:00.808: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:36:00.808: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:36:00.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8072 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 22:36:01.232: INFO: stderr: "I0126 22:36:01.001920    4355 log.go:172] (0xc000aa7600) (0xc000a20820) Create stream\nI0126 22:36:01.002059    4355 log.go:172] (0xc000aa7600) (0xc000a20820) Stream added, broadcasting: 1\nI0126 22:36:01.014215    4355 log.go:172] (0xc000aa7600) Reply frame received for 1\nI0126 22:36:01.014276    4355 log.go:172] (0xc000aa7600) (0xc000a20000) Create stream\nI0126 22:36:01.014289    4355 log.go:172] (0xc000aa7600) (0xc000a20000) Stream added, broadcasting: 3\nI0126 22:36:01.015467    4355 log.go:172] (0xc000aa7600) Reply frame received for 3\nI0126 22:36:01.015542    4355 log.go:172] (0xc000aa7600) (0xc00063a640) Create stream\nI0126 22:36:01.015551    4355 log.go:172] (0xc000aa7600) (0xc00063a640) Stream added, broadcasting: 5\nI0126 22:36:01.016835    4355 log.go:172] (0xc000aa7600) Reply frame received for 5\nI0126 22:36:01.090728    4355 log.go:172] (0xc000aa7600) Data frame received for 5\nI0126 22:36:01.090857    4355 log.go:172] (0xc00063a640) (5) Data frame handling\nI0126 22:36:01.090886    4355 log.go:172] (0xc00063a640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 22:36:01.137721    4355 log.go:172] (0xc000aa7600) Data frame received for 3\nI0126 22:36:01.137790    4355 log.go:172] (0xc000a20000) (3) Data frame handling\nI0126 22:36:01.137806    4355 log.go:172] (0xc000a20000) (3) Data frame sent\nI0126 22:36:01.222857    4355 log.go:172] (0xc000aa7600) (0xc000a20000) Stream removed, broadcasting: 3\nI0126 22:36:01.222972    4355 log.go:172] (0xc000aa7600) (0xc00063a640) Stream removed, broadcasting: 5\nI0126 22:36:01.223016    4355 log.go:172] (0xc000aa7600) Data frame received for 1\nI0126 22:36:01.223062    4355 log.go:172] (0xc000a20820) (1) Data frame handling\nI0126 22:36:01.223110    4355 log.go:172] (0xc000a20820) (1) Data frame sent\nI0126 22:36:01.223177    4355 log.go:172] (0xc000aa7600) (0xc000a20820) Stream removed, broadcasting: 1\nI0126 22:36:01.223216    4355 log.go:172] (0xc000aa7600) Go away received\nI0126 22:36:01.223772    4355 log.go:172] (0xc000aa7600) (0xc000a20820) Stream removed, broadcasting: 1\nI0126 22:36:01.223786    4355 log.go:172] (0xc000aa7600) (0xc000a20000) Stream removed, broadcasting: 3\nI0126 22:36:01.223793    4355 log.go:172] (0xc000aa7600) (0xc00063a640) Stream removed, broadcasting: 5\n"
Jan 26 22:36:01.233: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 22:36:01.233: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 26 22:36:01.317: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 26 22:36:11.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8072 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 22:36:11.881: INFO: stderr: "I0126 22:36:11.600290    4375 log.go:172] (0xc000bb4e70) (0xc000c68500) Create stream\nI0126 22:36:11.600498    4375 log.go:172] (0xc000bb4e70) (0xc000c68500) Stream added, broadcasting: 1\nI0126 22:36:11.604651    4375 log.go:172] (0xc000bb4e70) Reply frame received for 1\nI0126 22:36:11.604697    4375 log.go:172] (0xc000bb4e70) (0xc000b64320) Create stream\nI0126 22:36:11.604707    4375 log.go:172] (0xc000bb4e70) (0xc000b64320) Stream added, broadcasting: 3\nI0126 22:36:11.606456    4375 log.go:172] (0xc000bb4e70) Reply frame received for 3\nI0126 22:36:11.606485    4375 log.go:172] (0xc000bb4e70) (0xc000a0a140) Create stream\nI0126 22:36:11.606497    4375 log.go:172] (0xc000bb4e70) (0xc000a0a140) Stream added, broadcasting: 5\nI0126 22:36:11.608989    4375 log.go:172] (0xc000bb4e70) Reply frame received for 5\nI0126 22:36:11.732770    4375 log.go:172] (0xc000bb4e70) Data frame received for 3\nI0126 22:36:11.733486    4375 log.go:172] (0xc000b64320) (3) Data frame handling\nI0126 22:36:11.733607    4375 log.go:172] (0xc000b64320) (3) Data frame sent\nI0126 22:36:11.733779    4375 log.go:172] (0xc000bb4e70) Data frame received for 5\nI0126 22:36:11.733858    4375 log.go:172] (0xc000a0a140) (5) Data frame handling\nI0126 22:36:11.733887    4375 log.go:172] (0xc000a0a140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 22:36:11.861179    4375 log.go:172] (0xc000bb4e70) Data frame received for 1\nI0126 22:36:11.861395    4375 log.go:172] (0xc000bb4e70) (0xc000b64320) Stream removed, broadcasting: 3\nI0126 22:36:11.861518    4375 log.go:172] (0xc000c68500) (1) Data frame handling\nI0126 22:36:11.861546    4375 log.go:172] (0xc000c68500) (1) Data frame sent\nI0126 22:36:11.861610    4375 log.go:172] (0xc000bb4e70) (0xc000a0a140) Stream removed, broadcasting: 5\nI0126 22:36:11.861685    4375 log.go:172] (0xc000bb4e70) (0xc000c68500) Stream removed, broadcasting: 1\nI0126 22:36:11.861717    4375 log.go:172] (0xc000bb4e70) Go away received\nI0126 22:36:11.864090    4375 log.go:172] (0xc000bb4e70) (0xc000c68500) Stream removed, broadcasting: 1\nI0126 22:36:11.864327    4375 log.go:172] (0xc000bb4e70) (0xc000b64320) Stream removed, broadcasting: 3\nI0126 22:36:11.864352    4375 log.go:172] (0xc000bb4e70) (0xc000a0a140) Stream removed, broadcasting: 5\n"
Jan 26 22:36:11.881: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 22:36:11.881: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 22:36:21.926: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
Jan 26 22:36:21.926: INFO: Waiting for Pod statefulset-8072/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:36:21.927: INFO: Waiting for Pod statefulset-8072/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:36:21.927: INFO: Waiting for Pod statefulset-8072/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:36:31.935: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
Jan 26 22:36:31.935: INFO: Waiting for Pod statefulset-8072/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:36:31.935: INFO: Waiting for Pod statefulset-8072/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:36:41.940: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
Jan 26 22:36:41.940: INFO: Waiting for Pod statefulset-8072/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:36:51.937: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 26 22:37:01.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8072 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 26 22:37:02.422: INFO: stderr: "I0126 22:37:02.155315    4395 log.go:172] (0xc000b5a000) (0xc000519360) Create stream\nI0126 22:37:02.155596    4395 log.go:172] (0xc000b5a000) (0xc000519360) Stream added, broadcasting: 1\nI0126 22:37:02.161001    4395 log.go:172] (0xc000b5a000) Reply frame received for 1\nI0126 22:37:02.161138    4395 log.go:172] (0xc000b5a000) (0xc000990000) Create stream\nI0126 22:37:02.161160    4395 log.go:172] (0xc000b5a000) (0xc000990000) Stream added, broadcasting: 3\nI0126 22:37:02.162252    4395 log.go:172] (0xc000b5a000) Reply frame received for 3\nI0126 22:37:02.162288    4395 log.go:172] (0xc000b5a000) (0xc0006179a0) Create stream\nI0126 22:37:02.162302    4395 log.go:172] (0xc000b5a000) (0xc0006179a0) Stream added, broadcasting: 5\nI0126 22:37:02.163307    4395 log.go:172] (0xc000b5a000) Reply frame received for 5\nI0126 22:37:02.252483    4395 log.go:172] (0xc000b5a000) Data frame received for 5\nI0126 22:37:02.252620    4395 log.go:172] (0xc0006179a0) (5) Data frame handling\nI0126 22:37:02.252709    4395 log.go:172] (0xc0006179a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0126 22:37:02.315543    4395 log.go:172] (0xc000b5a000) Data frame received for 3\nI0126 22:37:02.315743    4395 log.go:172] (0xc000990000) (3) Data frame handling\nI0126 22:37:02.315778    4395 log.go:172] (0xc000990000) (3) Data frame sent\nI0126 22:37:02.406574    4395 log.go:172] (0xc000b5a000) Data frame received for 1\nI0126 22:37:02.406835    4395 log.go:172] (0xc000519360) (1) Data frame handling\nI0126 22:37:02.406868    4395 log.go:172] (0xc000519360) (1) Data frame sent\nI0126 22:37:02.407716    4395 log.go:172] (0xc000b5a000) (0xc000519360) Stream removed, broadcasting: 1\nI0126 22:37:02.408348    4395 log.go:172] (0xc000b5a000) (0xc0006179a0) Stream removed, broadcasting: 5\nI0126 22:37:02.408684    4395 log.go:172] (0xc000b5a000) (0xc000990000) Stream removed, broadcasting: 3\nI0126 22:37:02.408747    4395 log.go:172] (0xc000b5a000) Go away received\nI0126 22:37:02.409535    4395 log.go:172] (0xc000b5a000) (0xc000519360) Stream removed, broadcasting: 1\nI0126 22:37:02.409587    4395 log.go:172] (0xc000b5a000) (0xc000990000) Stream removed, broadcasting: 3\nI0126 22:37:02.409599    4395 log.go:172] (0xc000b5a000) (0xc0006179a0) Stream removed, broadcasting: 5\n"
Jan 26 22:37:02.422: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 26 22:37:02.422: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 26 22:37:12.458: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 26 22:37:22.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8072 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 26 22:37:22.976: INFO: stderr: "I0126 22:37:22.717210    4415 log.go:172] (0xc0009aa000) (0xc000a26000) Create stream\nI0126 22:37:22.717486    4415 log.go:172] (0xc0009aa000) (0xc000a26000) Stream added, broadcasting: 1\nI0126 22:37:22.720557    4415 log.go:172] (0xc0009aa000) Reply frame received for 1\nI0126 22:37:22.720585    4415 log.go:172] (0xc0009aa000) (0xc000a260a0) Create stream\nI0126 22:37:22.720590    4415 log.go:172] (0xc0009aa000) (0xc000a260a0) Stream added, broadcasting: 3\nI0126 22:37:22.723275    4415 log.go:172] (0xc0009aa000) Reply frame received for 3\nI0126 22:37:22.723308    4415 log.go:172] (0xc0009aa000) (0xc000a26140) Create stream\nI0126 22:37:22.723317    4415 log.go:172] (0xc0009aa000) (0xc000a26140) Stream added, broadcasting: 5\nI0126 22:37:22.725027    4415 log.go:172] (0xc0009aa000) Reply frame received for 5\nI0126 22:37:22.850131    4415 log.go:172] (0xc0009aa000) Data frame received for 5\nI0126 22:37:22.850361    4415 log.go:172] (0xc000a26140) (5) Data frame handling\nI0126 22:37:22.850382    4415 log.go:172] (0xc000a26140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0126 22:37:22.850591    4415 log.go:172] (0xc0009aa000) Data frame received for 3\nI0126 22:37:22.850619    4415 log.go:172] (0xc000a260a0) (3) Data frame handling\nI0126 22:37:22.850634    4415 log.go:172] (0xc000a260a0) (3) Data frame sent\nI0126 22:37:22.964332    4415 log.go:172] (0xc0009aa000) (0xc000a26140) Stream removed, broadcasting: 5\nI0126 22:37:22.964458    4415 log.go:172] (0xc0009aa000) Data frame received for 1\nI0126 22:37:22.964486    4415 log.go:172] (0xc000a26000) (1) Data frame handling\nI0126 22:37:22.964497    4415 log.go:172] (0xc000a26000) (1) Data frame sent\nI0126 22:37:22.964503    4415 log.go:172] (0xc0009aa000) (0xc000a260a0) Stream removed, broadcasting: 3\nI0126 22:37:22.964535    4415 log.go:172] (0xc0009aa000) (0xc000a26000) Stream removed, broadcasting: 1\nI0126 22:37:22.964550    4415 log.go:172] (0xc0009aa000) Go away received\nI0126 22:37:22.965419    4415 log.go:172] (0xc0009aa000) (0xc000a26000) Stream removed, broadcasting: 1\nI0126 22:37:22.965433    4415 log.go:172] (0xc0009aa000) (0xc000a260a0) Stream removed, broadcasting: 3\nI0126 22:37:22.965439    4415 log.go:172] (0xc0009aa000) (0xc000a26140) Stream removed, broadcasting: 5\n"
Jan 26 22:37:22.976: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 26 22:37:22.976: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 26 22:37:33.009: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
Jan 26 22:37:33.009: INFO: Waiting for Pod statefulset-8072/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 26 22:37:33.009: INFO: Waiting for Pod statefulset-8072/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 26 22:37:43.017: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
Jan 26 22:37:43.017: INFO: Waiting for Pod statefulset-8072/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 26 22:37:43.017: INFO: Waiting for Pod statefulset-8072/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 26 22:37:53.068: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
Jan 26 22:37:53.069: INFO: Waiting for Pod statefulset-8072/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 26 22:38:03.016: INFO: Waiting for StatefulSet statefulset-8072/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 22:38:13.024: INFO: Deleting all statefulset in ns statefulset-8072
Jan 26 22:38:13.030: INFO: Scaling statefulset ss2 to 0
Jan 26 22:38:43.096: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 22:38:43.102: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:38:43.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8072" for this suite.

• [SLOW TEST:192.625 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":235,"skipped":3960,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:38:43.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 26 22:38:43.274: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 26 22:38:48.287: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:38:48.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5972" for this suite.

• [SLOW TEST:5.300 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":236,"skipped":3978,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:38:48.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:39:02.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-850" for this suite.

• [SLOW TEST:14.334 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3986,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:39:02.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-d1dd219c-e0f6-45b7-8ae5-bba166091ea2 in namespace container-probe-7981
Jan 26 22:39:11.019: INFO: Started pod busybox-d1dd219c-e0f6-45b7-8ae5-bba166091ea2 in namespace container-probe-7981
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 22:39:11.021: INFO: Initial restart count of pod busybox-d1dd219c-e0f6-45b7-8ae5-bba166091ea2 is 0
Jan 26 22:40:03.217: INFO: Restart count of pod container-probe-7981/busybox-d1dd219c-e0f6-45b7-8ae5-bba166091ea2 is now 1 (52.195641372s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:40:03.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7981" for this suite.

• [SLOW TEST:60.472 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3989,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:40:03.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0126 22:40:14.661644       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 22:40:14.661: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:40:14.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-803" for this suite.

• [SLOW TEST:11.418 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":239,"skipped":4041,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:40:14.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:40:18.366: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8" in namespace "security-context-test-4526" to be "success or failure"
Jan 26 22:40:18.902: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 535.920283ms
Jan 26 22:40:21.018: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652330491s
Jan 26 22:40:23.307: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.941433836s
Jan 26 22:40:25.314: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.948554698s
Jan 26 22:40:27.319: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.953514908s
Jan 26 22:40:29.324: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.958325018s
Jan 26 22:40:31.331: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.965432931s
Jan 26 22:40:33.339: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.973642742s
Jan 26 22:40:35.347: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.981268049s
Jan 26 22:40:35.347: INFO: Pod "busybox-readonly-false-91549559-5ea4-4738-b5c6-e90d73b2b3d8" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:40:35.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4526" for this suite.

• [SLOW TEST:20.682 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4045,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:40:35.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 26 22:40:35.504: INFO: Created pod &Pod{ObjectMeta:{dns-775  dns-775 /api/v1/namespaces/dns-775/pods/dns-775 fc2f17bf-d8c3-4f26-930c-3a7e6bd0bd03 4556957 0 2020-01-26 22:40:35 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xlkk2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xlkk2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xlkk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 26 22:40:41.518: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-775 PodName:dns-775 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 22:40:41.518: INFO: >>> kubeConfig: /root/.kube/config
I0126 22:40:41.634907       8 log.go:172] (0xc00164c2c0) (0xc001aa4fa0) Create stream
I0126 22:40:41.635118       8 log.go:172] (0xc00164c2c0) (0xc001aa4fa0) Stream added, broadcasting: 1
I0126 22:40:41.643439       8 log.go:172] (0xc00164c2c0) Reply frame received for 1
I0126 22:40:41.643519       8 log.go:172] (0xc00164c2c0) (0xc0024268c0) Create stream
I0126 22:40:41.643540       8 log.go:172] (0xc00164c2c0) (0xc0024268c0) Stream added, broadcasting: 3
I0126 22:40:41.646818       8 log.go:172] (0xc00164c2c0) Reply frame received for 3
I0126 22:40:41.646879       8 log.go:172] (0xc00164c2c0) (0xc002426960) Create stream
I0126 22:40:41.646899       8 log.go:172] (0xc00164c2c0) (0xc002426960) Stream added, broadcasting: 5
I0126 22:40:41.649036       8 log.go:172] (0xc00164c2c0) Reply frame received for 5
I0126 22:40:41.778884       8 log.go:172] (0xc00164c2c0) Data frame received for 3
I0126 22:40:41.779144       8 log.go:172] (0xc0024268c0) (3) Data frame handling
I0126 22:40:41.779237       8 log.go:172] (0xc0024268c0) (3) Data frame sent
I0126 22:40:41.874372       8 log.go:172] (0xc00164c2c0) (0xc0024268c0) Stream removed, broadcasting: 3
I0126 22:40:41.874613       8 log.go:172] (0xc00164c2c0) Data frame received for 1
I0126 22:40:41.874659       8 log.go:172] (0xc001aa4fa0) (1) Data frame handling
I0126 22:40:41.874709       8 log.go:172] (0xc001aa4fa0) (1) Data frame sent
I0126 22:40:41.874721       8 log.go:172] (0xc00164c2c0) (0xc001aa4fa0) Stream removed, broadcasting: 1
I0126 22:40:41.874965       8 log.go:172] (0xc00164c2c0) (0xc002426960) Stream removed, broadcasting: 5
I0126 22:40:41.875005       8 log.go:172] (0xc00164c2c0) (0xc001aa4fa0) Stream removed, broadcasting: 1
I0126 22:40:41.875115       8 log.go:172] (0xc00164c2c0) (0xc0024268c0) Stream removed, broadcasting: 3
I0126 22:40:41.875124       8 log.go:172] (0xc00164c2c0) (0xc002426960) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 26 22:40:41.875: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-775 PodName:dns-775 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 22:40:41.875: INFO: >>> kubeConfig: /root/.kube/config
I0126 22:40:41.875705       8 log.go:172] (0xc00164c2c0) Go away received
I0126 22:40:41.926380       8 log.go:172] (0xc00309c420) (0xc0021fc320) Create stream
I0126 22:40:41.926469       8 log.go:172] (0xc00309c420) (0xc0021fc320) Stream added, broadcasting: 1
I0126 22:40:41.931399       8 log.go:172] (0xc00309c420) Reply frame received for 1
I0126 22:40:41.931491       8 log.go:172] (0xc00309c420) (0xc001aa57c0) Create stream
I0126 22:40:41.931511       8 log.go:172] (0xc00309c420) (0xc001aa57c0) Stream added, broadcasting: 3
I0126 22:40:41.932645       8 log.go:172] (0xc00309c420) Reply frame received for 3
I0126 22:40:41.932678       8 log.go:172] (0xc00309c420) (0xc001bb9a40) Create stream
I0126 22:40:41.932690       8 log.go:172] (0xc00309c420) (0xc001bb9a40) Stream added, broadcasting: 5
I0126 22:40:41.933688       8 log.go:172] (0xc00309c420) Reply frame received for 5
I0126 22:40:42.047654       8 log.go:172] (0xc00309c420) Data frame received for 3
I0126 22:40:42.047770       8 log.go:172] (0xc001aa57c0) (3) Data frame handling
I0126 22:40:42.047807       8 log.go:172] (0xc001aa57c0) (3) Data frame sent
I0126 22:40:42.151182       8 log.go:172] (0xc00309c420) (0xc001aa57c0) Stream removed, broadcasting: 3
I0126 22:40:42.151664       8 log.go:172] (0xc00309c420) Data frame received for 1
I0126 22:40:42.151923       8 log.go:172] (0xc00309c420) (0xc001bb9a40) Stream removed, broadcasting: 5
I0126 22:40:42.152000       8 log.go:172] (0xc0021fc320) (1) Data frame handling
I0126 22:40:42.152045       8 log.go:172] (0xc0021fc320) (1) Data frame sent
I0126 22:40:42.152059       8 log.go:172] (0xc00309c420) (0xc0021fc320) Stream removed, broadcasting: 1
I0126 22:40:42.152077       8 log.go:172] (0xc00309c420) Go away received
I0126 22:40:42.153392       8 log.go:172] (0xc00309c420) (0xc0021fc320) Stream removed, broadcasting: 1
I0126 22:40:42.153520       8 log.go:172] (0xc00309c420) (0xc001aa57c0) Stream removed, broadcasting: 3
I0126 22:40:42.153613       8 log.go:172] (0xc00309c420) (0xc001bb9a40) Stream removed, broadcasting: 5
Jan 26 22:40:42.153: INFO: Deleting pod dns-775...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:40:42.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-775" for this suite.

• [SLOW TEST:6.833 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":241,"skipped":4079,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:40:42.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8890 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8890;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8890 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8890;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8890.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8890.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8890.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8890.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8890.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8890.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8890.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 152.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.152_udp@PTR;check="$$(dig +tcp +noall +answer +search 152.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.152_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8890 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8890;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8890 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8890;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8890.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8890.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8890.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8890.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8890.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8890.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8890.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8890.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8890.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 152.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.152_udp@PTR;check="$$(dig +tcp +noall +answer +search 152.122.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.122.152_tcp@PTR;sleep 1; done
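
Both probe scripts follow one template: for every name, run dig over UDP and then TCP, and write an OK marker under /results only when the answer section is non-empty; the wheezy_/jessie_ prefixes distinguish the two prober images. The doubled $$ is presumably escaping so the dollar signs survive an extra round of expansion before the pod's shell sees them. A hedged sketch of how one clause could be assembled (this helper is hypothetical, not the framework's actual builder):

    package sketch

    import "fmt"

    // checkClause builds one semicolon-terminated probe step in the shape of
    // the logged commands. lookup is the dig query (name plus record type);
    // result is the marker file name written under /results on success.
    func checkClause(lookup, result string, tcp bool) string {
        flag := "+notcp"
        if tcp {
            flag = "+tcp"
        }
        return fmt.Sprintf(
            `check="$$(dig %s +noall +answer +search %s)" && test -n "$$check" && echo OK > /results/%s;`,
            flag, lookup, result)
    }

For example, checkClause("dns-test-service A", "wheezy_udp@dns-test-service", false) reproduces the first step of the wheezy script above.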

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 22:40:56.651: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.668: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.676: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.691: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.695: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.699: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.741: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.745: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.750: INFO: Unable to read jessie_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.754: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.758: INFO: Unable to read jessie_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.769: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.774: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:40:56.830: INFO: Lookups using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8890 wheezy_tcp@dns-test-service.dns-8890 wheezy_udp@dns-test-service.dns-8890.svc wheezy_tcp@dns-test-service.dns-8890.svc wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8890 jessie_tcp@dns-test-service.dns-8890 jessie_udp@dns-test-service.dns-8890.svc jessie_tcp@dns-test-service.dns-8890.svc jessie_udp@_http._tcp.dns-test-service.dns-8890.svc jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc]

Jan 26 22:41:01.840: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.844: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.860: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.876: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.889: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.896: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.923: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.927: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.931: INFO: Unable to read jessie_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.936: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.943: INFO: Unable to read jessie_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.948: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.952: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:01.981: INFO: Lookups using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8890 wheezy_tcp@dns-test-service.dns-8890 wheezy_udp@dns-test-service.dns-8890.svc wheezy_tcp@dns-test-service.dns-8890.svc wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8890 jessie_tcp@dns-test-service.dns-8890 jessie_udp@dns-test-service.dns-8890.svc jessie_tcp@dns-test-service.dns-8890.svc jessie_udp@_http._tcp.dns-test-service.dns-8890.svc jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc]

Jan 26 22:41:06.842: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.850: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.866: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.872: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.877: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.888: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.893: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.929: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.935: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.941: INFO: Unable to read jessie_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.954: INFO: Unable to read jessie_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.965: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.970: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.975: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:06.996: INFO: Lookups using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8890 wheezy_tcp@dns-test-service.dns-8890 wheezy_udp@dns-test-service.dns-8890.svc wheezy_tcp@dns-test-service.dns-8890.svc wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8890 jessie_tcp@dns-test-service.dns-8890 jessie_udp@dns-test-service.dns-8890.svc jessie_tcp@dns-test-service.dns-8890.svc jessie_udp@_http._tcp.dns-test-service.dns-8890.svc jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc]

Jan 26 22:41:11.841: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.846: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.856: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.869: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.878: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.892: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.926: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.953: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.987: INFO: Unable to read jessie_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.991: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:11.995: INFO: Unable to read jessie_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:12.000: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:12.003: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:12.007: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:12.046: INFO: Lookups using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8890 wheezy_tcp@dns-test-service.dns-8890 wheezy_udp@dns-test-service.dns-8890.svc wheezy_tcp@dns-test-service.dns-8890.svc wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8890 jessie_tcp@dns-test-service.dns-8890 jessie_udp@dns-test-service.dns-8890.svc jessie_tcp@dns-test-service.dns-8890.svc jessie_udp@_http._tcp.dns-test-service.dns-8890.svc jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc]

Jan 26 22:41:16.844: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.858: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.869: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.882: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.888: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.894: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.941: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.945: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.949: INFO: Unable to read jessie_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.963: INFO: Unable to read jessie_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.968: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:16.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:17.029: INFO: Lookups using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8890 wheezy_tcp@dns-test-service.dns-8890 wheezy_udp@dns-test-service.dns-8890.svc wheezy_tcp@dns-test-service.dns-8890.svc wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8890 jessie_tcp@dns-test-service.dns-8890 jessie_udp@dns-test-service.dns-8890.svc jessie_tcp@dns-test-service.dns-8890.svc jessie_udp@_http._tcp.dns-test-service.dns-8890.svc jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc]

Jan 26 22:41:21.869: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.880: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.901: INFO: Unable to read wheezy_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.911: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.919: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.925: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.971: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.976: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.981: INFO: Unable to read jessie_udp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.985: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890 from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.991: INFO: Unable to read jessie_udp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:21.997: INFO: Unable to read jessie_tcp@dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:22.040: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:22.047: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc from pod dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d: the server could not find the requested resource (get pods dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d)
Jan 26 22:41:22.091: INFO: Lookups using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8890 wheezy_tcp@dns-test-service.dns-8890 wheezy_udp@dns-test-service.dns-8890.svc wheezy_tcp@dns-test-service.dns-8890.svc wheezy_udp@_http._tcp.dns-test-service.dns-8890.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8890.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8890 jessie_tcp@dns-test-service.dns-8890 jessie_udp@dns-test-service.dns-8890.svc jessie_tcp@dns-test-service.dns-8890.svc jessie_udp@_http._tcp.dns-test-service.dns-8890.svc jessie_tcp@_http._tcp.dns-test-service.dns-8890.svc]

Jan 26 22:41:27.095: INFO: DNS probes using dns-8890/dns-test-2ab4bde2-7195-4f9d-af53-0e24a11a2a3d succeeded
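
The six near-identical failure rounds above are the expected warm-up, not a bug: the prober pod writes each /results marker only once the corresponding lookup succeeds, and until the headless service's records propagate, fetching those files fails with "the server could not find the requested resource". The framework simply polls, roughly every five seconds here, until every expected file exists. A minimal sketch of that pattern, assuming k8s.io/apimachinery's wait package and a hypothetical fileExists helper standing in for the per-file fetch:

    package sketch

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForResults polls until every expected marker file is readable,
    // treating a missing file as "not yet" rather than a hard failure.
    func waitForResults(names []string, fileExists func(string) bool) error {
        return wait.Poll(5*time.Second, 10*time.Minute, func() (bool, error) {
            for _, n := range names {
                if !fileExists(n) {
                    return false, nil // keep polling
                }
            }
            return true, nil
        })
    }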

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:41:27.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8890" for this suite.

• [SLOW TEST:45.548 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":242,"skipped":4101,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:41:27.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2f697939-c5f5-410a-9001-d02c943dd059
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2f697939-c5f5-410a-9001-d02c943dd059
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:42:55.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8763" for this suite.

• [SLOW TEST:87.368 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4131,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:42:55.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 26 22:42:55.932: INFO: Pod name wrapped-volume-race-3ed858af-d0a8-4f6f-80f3-c49ab2c5651b: Found 0 pods out of 5
Jan 26 22:43:00.944: INFO: Pod name wrapped-volume-race-3ed858af-d0a8-4f6f-80f3-c49ab2c5651b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3ed858af-d0a8-4f6f-80f3-c49ab2c5651b in namespace emptydir-wrapper-9603, will wait for the garbage collector to delete the pods
Jan 26 22:43:27.072: INFO: Deleting ReplicationController wrapped-volume-race-3ed858af-d0a8-4f6f-80f3-c49ab2c5651b took: 8.744256ms
Jan 26 22:43:27.472: INFO: Terminating ReplicationController wrapped-volume-race-3ed858af-d0a8-4f6f-80f3-c49ab2c5651b pods took: 400.431012ms
STEP: Creating RC which spawns configmap-volume pods
Jan 26 22:43:38.158: INFO: Pod name wrapped-volume-race-18567e2b-48b4-4079-8501-09b611a4a399: Found 0 pods out of 5
Jan 26 22:43:43.169: INFO: Pod name wrapped-volume-race-18567e2b-48b4-4079-8501-09b611a4a399: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-18567e2b-48b4-4079-8501-09b611a4a399 in namespace emptydir-wrapper-9603, will wait for the garbage collector to delete the pods
Jan 26 22:44:11.284: INFO: Deleting ReplicationController wrapped-volume-race-18567e2b-48b4-4079-8501-09b611a4a399 took: 18.019413ms
Jan 26 22:44:11.685: INFO: Terminating ReplicationController wrapped-volume-race-18567e2b-48b4-4079-8501-09b611a4a399 pods took: 400.496128ms
STEP: Creating RC which spawns configmap-volume pods
Jan 26 22:44:23.747: INFO: Pod name wrapped-volume-race-7310ccf0-8060-4d4b-a0fa-6d1ecde38524: Found 0 pods out of 5
Jan 26 22:44:28.760: INFO: Pod name wrapped-volume-race-7310ccf0-8060-4d4b-a0fa-6d1ecde38524: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7310ccf0-8060-4d4b-a0fa-6d1ecde38524 in namespace emptydir-wrapper-9603, will wait for the garbage collector to delete the pods
Jan 26 22:44:56.869: INFO: Deleting ReplicationController wrapped-volume-race-7310ccf0-8060-4d4b-a0fa-6d1ecde38524 took: 19.249613ms
Jan 26 22:44:57.370: INFO: Terminating ReplicationController wrapped-volume-race-7310ccf0-8060-4d4b-a0fa-6d1ecde38524 pods took: 501.069818ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:45:13.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9603" for this suite.

• [SLOW TEST:138.895 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":244,"skipped":4141,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:45:14.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:45:15.333: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 26 22:45:17.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:45:19.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:45:21.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:45:23.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675515, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
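
The four status dumps above are one readiness poll roughly every two seconds: the webhook deployment keeps reporting UnavailableReplicas:1 with Available=False until its pod passes readiness. The predicate being waited on is approximately the following (a sketch, not the framework's exact check):

    package sketch

    import appsv1 "k8s.io/api/apps/v1"

    // deploymentComplete reports whether a rollout has finished: the
    // controller has observed the latest spec and every desired replica is
    // both updated and available.
    func deploymentComplete(d *appsv1.Deployment) bool {
        return d.Status.ObservedGeneration >= d.Generation &&
            d.Status.UpdatedReplicas == *d.Spec.Replicas &&
            d.Status.AvailableReplicas == *d.Spec.Replicas
    }
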
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:45:26.417: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:45:26.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:45:27.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-82" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.825 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":245,"skipped":4143,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:45:27.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:45:27.977: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 26 22:45:27.997: INFO: Number of nodes with available pods: 0
Jan 26 22:45:27.997: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:32.673: INFO: Number of nodes with available pods: 0
Jan 26 22:45:32.673: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:33.985: INFO: Number of nodes with available pods: 0
Jan 26 22:45:33.986: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:35.706: INFO: Number of nodes with available pods: 0
Jan 26 22:45:35.706: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:36.229: INFO: Number of nodes with available pods: 0
Jan 26 22:45:36.229: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:37.229: INFO: Number of nodes with available pods: 0
Jan 26 22:45:37.229: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:38.008: INFO: Number of nodes with available pods: 0
Jan 26 22:45:38.008: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:39.013: INFO: Number of nodes with available pods: 0
Jan 26 22:45:39.013: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:45:40.122: INFO: Number of nodes with available pods: 1
Jan 26 22:45:40.122: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 26 22:45:41.013: INFO: Number of nodes with available pods: 1
Jan 26 22:45:41.014: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 26 22:45:42.013: INFO: Number of nodes with available pods: 1
Jan 26 22:45:42.014: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 26 22:45:43.013: INFO: Number of nodes with available pods: 2
Jan 26 22:45:43.013: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 26 22:45:43.150: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:43.150: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:44.181: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:44.181: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:45.243: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:45.243: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:46.180: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:46.180: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:47.180: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:47.180: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:48.185: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:48.185: INFO: Pod daemon-set-blkwn is not available
Jan 26 22:45:48.185: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:49.181: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:49.181: INFO: Pod daemon-set-blkwn is not available
Jan 26 22:45:49.181: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:50.182: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:50.182: INFO: Pod daemon-set-blkwn is not available
Jan 26 22:45:50.182: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:51.182: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:51.182: INFO: Pod daemon-set-blkwn is not available
Jan 26 22:45:51.182: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:52.182: INFO: Wrong image for pod: daemon-set-blkwn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:52.182: INFO: Pod daemon-set-blkwn is not available
Jan 26 22:45:52.182: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:53.195: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:53.195: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:54.538: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:54.539: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:55.600: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:55.600: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:56.180: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:56.180: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:57.181: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:57.181: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:58.184: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:58.184: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:45:59.297: INFO: Pod daemon-set-g2v6z is not available
Jan 26 22:45:59.297: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:00.185: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:01.192: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:02.191: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:03.180: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:04.183: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:04.183: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:05.185: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:05.185: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:06.183: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:06.183: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:07.182: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:07.182: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:08.182: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:08.182: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:09.182: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:09.182: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:10.181: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:10.181: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:11.180: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:11.180: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:12.183: INFO: Wrong image for pod: daemon-set-kmkq6. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 26 22:46:12.184: INFO: Pod daemon-set-kmkq6 is not available
Jan 26 22:46:13.186: INFO: Pod daemon-set-8v7tk is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 26 22:46:13.203: INFO: Number of nodes with available pods: 1
Jan 26 22:46:13.203: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:46:14.214: INFO: Number of nodes with available pods: 1
Jan 26 22:46:14.214: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:46:15.235: INFO: Number of nodes with available pods: 1
Jan 26 22:46:15.235: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:46:16.217: INFO: Number of nodes with available pods: 1
Jan 26 22:46:16.217: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:46:17.218: INFO: Number of nodes with available pods: 1
Jan 26 22:46:17.218: INFO: Node jerma-node is running more than one daemon pod
Jan 26 22:46:18.222: INFO: Number of nodes with available pods: 2
Jan 26 22:46:18.222: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5783, will wait for the garbage collector to delete the pods
Jan 26 22:46:18.326: INFO: Deleting DaemonSet.extensions daemon-set took: 9.835305ms
Jan 26 22:46:18.627: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.481468ms
Jan 26 22:46:25.356: INFO: Number of nodes with available pods: 0
Jan 26 22:46:25.356: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 22:46:25.360: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5783/daemonsets","resourceVersion":"4558782"},"items":null}

Jan 26 22:46:25.363: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5783/pods","resourceVersion":"4558782"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:46:25.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5783" for this suite.

• [SLOW TEST:57.537 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":246,"skipped":4150,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:46:25.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-9770
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9770
STEP: creating replication controller externalsvc in namespace services-9770
I0126 22:46:25.814657       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9770, replica count: 2
I0126 22:46:28.866275       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:46:31.867494       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:46:34.868109       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:46:37.868724       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 26 22:46:37.920: INFO: Creating new exec pod
Jan 26 22:46:45.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9770 execpodcqfsr -- /bin/sh -x -c nslookup nodeport-service'
Jan 26 22:46:48.323: INFO: stderr: "I0126 22:46:48.139215    4429 log.go:172] (0xc00011a2c0) (0xc00064e640) Create stream\nI0126 22:46:48.139536    4429 log.go:172] (0xc00011a2c0) (0xc00064e640) Stream added, broadcasting: 1\nI0126 22:46:48.146310    4429 log.go:172] (0xc00011a2c0) Reply frame received for 1\nI0126 22:46:48.146474    4429 log.go:172] (0xc00011a2c0) (0xc000523400) Create stream\nI0126 22:46:48.146501    4429 log.go:172] (0xc00011a2c0) (0xc000523400) Stream added, broadcasting: 3\nI0126 22:46:48.149499    4429 log.go:172] (0xc00011a2c0) Reply frame received for 3\nI0126 22:46:48.149623    4429 log.go:172] (0xc00011a2c0) (0xc0008dc0a0) Create stream\nI0126 22:46:48.149655    4429 log.go:172] (0xc00011a2c0) (0xc0008dc0a0) Stream added, broadcasting: 5\nI0126 22:46:48.152394    4429 log.go:172] (0xc00011a2c0) Reply frame received for 5\nI0126 22:46:48.228357    4429 log.go:172] (0xc00011a2c0) Data frame received for 5\nI0126 22:46:48.228420    4429 log.go:172] (0xc0008dc0a0) (5) Data frame handling\nI0126 22:46:48.228449    4429 log.go:172] (0xc0008dc0a0) (5) Data frame sent\nI0126 22:46:48.228462    4429 log.go:172] (0xc00011a2c0) Data frame received for 5\nI0126 22:46:48.228473    4429 log.go:172] (0xc0008dc0a0) (5) Data frame handling\n+ nslookup nodeport-serviceI0126 22:46:48.228519    4429 log.go:172] (0xc0008dc0a0) (5) Data frame sent\nI0126 22:46:48.228532    4429 log.go:172] (0xc00011a2c0) Data frame received for 5\nI0126 22:46:48.228546    4429 log.go:172] (0xc0008dc0a0) (5) Data frame handling\nI0126 22:46:48.228559    4429 log.go:172] (0xc0008dc0a0) (5) Data frame sent\n\nI0126 22:46:48.242727    4429 log.go:172] (0xc00011a2c0) Data frame received for 3\nI0126 22:46:48.242830    4429 log.go:172] (0xc000523400) (3) Data frame handling\nI0126 22:46:48.242860    4429 log.go:172] (0xc000523400) (3) Data frame sent\nI0126 22:46:48.245420    4429 log.go:172] (0xc00011a2c0) Data frame received for 3\nI0126 22:46:48.245441    4429 log.go:172] (0xc000523400) (3) Data frame handling\nI0126 22:46:48.245454    4429 log.go:172] (0xc000523400) (3) Data frame sent\nI0126 22:46:48.309475    4429 log.go:172] (0xc00011a2c0) Data frame received for 1\nI0126 22:46:48.309722    4429 log.go:172] (0xc00064e640) (1) Data frame handling\nI0126 22:46:48.309753    4429 log.go:172] (0xc00064e640) (1) Data frame sent\nI0126 22:46:48.309775    4429 log.go:172] (0xc00011a2c0) (0xc00064e640) Stream removed, broadcasting: 1\nI0126 22:46:48.311043    4429 log.go:172] (0xc00011a2c0) (0xc000523400) Stream removed, broadcasting: 3\nI0126 22:46:48.311237    4429 log.go:172] (0xc00011a2c0) (0xc0008dc0a0) Stream removed, broadcasting: 5\nI0126 22:46:48.311271    4429 log.go:172] (0xc00011a2c0) (0xc00064e640) Stream removed, broadcasting: 1\nI0126 22:46:48.311280    4429 log.go:172] (0xc00011a2c0) (0xc000523400) Stream removed, broadcasting: 3\nI0126 22:46:48.311287    4429 log.go:172] (0xc00011a2c0) (0xc0008dc0a0) Stream removed, broadcasting: 5\n"
Jan 26 22:46:48.324: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9770.svc.cluster.local\tcanonical name = externalsvc.services-9770.svc.cluster.local.\nName:\texternalsvc.services-9770.svc.cluster.local\nAddress: 10.96.4.247\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9770, will wait for the garbage collector to delete the pods
Jan 26 22:46:48.387: INFO: Deleting ReplicationController externalsvc took: 5.886248ms
Jan 26 22:46:48.788: INFO: Terminating ReplicationController externalsvc pods took: 400.463229ms
Jan 26 22:47:03.276: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:03.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9770" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:37.982 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":247,"skipped":4159,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:03.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:47:03.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:13.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3123" for this suite.

• [SLOW TEST:10.267 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4159,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:13.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-360f2255-b960-4711-aa57-a2381edb42df
STEP: Creating a pod to test consume configMaps
Jan 26 22:47:13.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f" in namespace "configmap-8290" to be "success or failure"
Jan 26 22:47:13.838: INFO: Pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.585855ms
Jan 26 22:47:15.883: INFO: Pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067681996s
Jan 26 22:47:17.891: INFO: Pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074915658s
Jan 26 22:47:19.895: INFO: Pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079706875s
Jan 26 22:47:21.901: INFO: Pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08572475s
STEP: Saw pod success
Jan 26 22:47:21.901: INFO: Pod "pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f" satisfied condition "success or failure"
Jan 26 22:47:21.905: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f container configmap-volume-test: 
STEP: delete the pod
Jan 26 22:47:22.077: INFO: Waiting for pod pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f to disappear
Jan 26 22:47:22.091: INFO: Pod pod-configmaps-b339c029-7a96-4c38-9f1f-47745922f93f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:22.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8290" for this suite.

• [SLOW TEST:8.512 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4204,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:22.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jan 26 22:47:22.226: INFO: Waiting up to 5m0s for pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f" in namespace "containers-8115" to be "success or failure"
Jan 26 22:47:22.273: INFO: Pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.906448ms
Jan 26 22:47:24.277: INFO: Pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051681643s
Jan 26 22:47:26.286: INFO: Pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060796458s
Jan 26 22:47:28.292: INFO: Pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066106496s
Jan 26 22:47:30.409: INFO: Pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.18307374s
STEP: Saw pod success
Jan 26 22:47:30.409: INFO: Pod "client-containers-4613b8ce-3df9-4254-b982-154cd324363f" satisfied condition "success or failure"
Jan 26 22:47:30.414: INFO: Trying to get logs from node jerma-node pod client-containers-4613b8ce-3df9-4254-b982-154cd324363f container test-container: 
STEP: delete the pod
Jan 26 22:47:30.481: INFO: Waiting for pod client-containers-4613b8ce-3df9-4254-b982-154cd324363f to disappear
Jan 26 22:47:30.498: INFO: Pod client-containers-4613b8ce-3df9-4254-b982-154cd324363f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:30.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8115" for this suite.

• [SLOW TEST:8.421 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4206,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:30.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 26 22:47:30.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2586'
Jan 26 22:47:31.004: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 22:47:31.004: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 26 22:47:31.047: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-dnh64]
Jan 26 22:47:31.047: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-dnh64" in namespace "kubectl-2586" to be "running and ready"
Jan 26 22:47:31.057: INFO: Pod "e2e-test-httpd-rc-dnh64": Phase="Pending", Reason="", readiness=false. Elapsed: 10.531837ms
Jan 26 22:47:33.071: INFO: Pod "e2e-test-httpd-rc-dnh64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023675921s
Jan 26 22:47:35.080: INFO: Pod "e2e-test-httpd-rc-dnh64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033536591s
Jan 26 22:47:37.092: INFO: Pod "e2e-test-httpd-rc-dnh64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044927836s
Jan 26 22:47:39.102: INFO: Pod "e2e-test-httpd-rc-dnh64": Phase="Running", Reason="", readiness=true. Elapsed: 8.055102665s
Jan 26 22:47:39.102: INFO: Pod "e2e-test-httpd-rc-dnh64" satisfied condition "running and ready"
Jan 26 22:47:39.102: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-dnh64]
Jan 26 22:47:39.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2586'
Jan 26 22:47:39.313: INFO: stderr: ""
Jan 26 22:47:39.314: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\n[Sun Jan 26 22:47:37.346340 2020] [mpm_event:notice] [pid 1:tid 140108054395752] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sun Jan 26 22:47:37.346418 2020] [core:notice] [pid 1:tid 140108054395752] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 26 22:47:39.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2586'
Jan 26 22:47:39.455: INFO: stderr: ""
Jan 26 22:47:39.455: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:39.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2586" for this suite.

• [SLOW TEST:8.906 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":251,"skipped":4213,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:39.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-a71dffd3-d240-480b-84c4-b50a8261dd36
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:39.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7939" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":252,"skipped":4217,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:39.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 26 22:47:39.638: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Jan 26 22:47:40.337: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 26 22:47:42.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:47:44.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:47:46.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:47:48.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:47:50.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675660, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:47:53.331: INFO: Waited 829.236055ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:47:53.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4188" for this suite.

• [SLOW TEST:14.328 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":253,"skipped":4225,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:47:53.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 26 22:48:03.260: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:48:03.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6722" for this suite.

• [SLOW TEST:9.484 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4275,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:48:03.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:48:03.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3524" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4278,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:48:03.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gnkhl in namespace proxy-476
I0126 22:48:03.946092       8 runners.go:189] Created replication controller with name: proxy-service-gnkhl, namespace: proxy-476, replica count: 1
I0126 22:48:04.999523       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:05.999939       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:07.000252       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:08.000534       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:09.001198       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:10.001668       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:11.002043       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:12.002456       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:13.002902       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:14.003459       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:15.003830       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0126 22:48:16.004227       8 runners.go:189] proxy-service-gnkhl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 26 22:48:16.013: INFO: setup took 12.183139017s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
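
A legend for the attempt lines that follow: each path exercises the apiserver's proxy subresource against the echo pod or its service, optionally with a scheme prefix and a named or numeric port. Equivalent ad-hoc requests, assuming kubectl with the same kubeconfig (names taken from this run):

kubectl get --raw "/api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/"        # pod, numeric port -> "foo"
kubectl get --raw "/api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/"  # https scheme prefix -> "tls baz"
kubectl get --raw "/api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/"    # service port by name -> "foo"
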
Jan 26 22:48:16.040: INFO: (0) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 26.32709ms)
Jan 26 22:48:16.040: INFO: (0) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 26.527158ms)
Jan 26 22:48:16.040: INFO: (0) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 26.810318ms)
Jan 26 22:48:16.040: INFO: (0) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 26.353363ms)
Jan 26 22:48:16.040: INFO: (0) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 26.883531ms)
Jan 26 22:48:16.040: INFO: (0) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 26.497725ms)
Jan 26 22:48:16.043: INFO: (0) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: test (200; 15.141965ms)
Jan 26 22:48:16.069: INFO: (1) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 14.929882ms)
Jan 26 22:48:16.069: INFO: (1) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: test (200; 11.029557ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 11.076953ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 11.54501ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 11.559328ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 11.661157ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 11.572049ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 11.889034ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 11.784316ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 12.025026ms)
Jan 26 22:48:16.081: INFO: (2) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 11.911358ms)
Jan 26 22:48:16.089: INFO: (3) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 7.104526ms)
Jan 26 22:48:16.089: INFO: (3) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 6.674856ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 7.821224ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 8.330381ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 7.523736ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 7.708254ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 7.822984ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 8.356469ms)
Jan 26 22:48:16.090: INFO: (3) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: test (200; 9.235593ms)
Jan 26 22:48:16.133: INFO: (4) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 9.304009ms)
Jan 26 22:48:16.137: INFO: (4) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 13.064381ms)
Jan 26 22:48:16.138: INFO: (4) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 13.643775ms)
Jan 26 22:48:16.138: INFO: (4) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 13.667682ms)
Jan 26 22:48:16.138: INFO: (4) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 13.66147ms)
Jan 26 22:48:16.142: INFO: (4) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 18.236402ms)
Jan 26 22:48:16.143: INFO: (4) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 18.507002ms)
Jan 26 22:48:16.143: INFO: (4) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 19.027643ms)
Jan 26 22:48:16.143: INFO: (4) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 19.290902ms)
Jan 26 22:48:16.146: INFO: (4) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 21.56461ms)
Jan 26 22:48:16.146: INFO: (4) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 21.588914ms)
Jan 26 22:48:16.146: INFO: (4) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 21.711814ms)
Jan 26 22:48:16.146: INFO: (4) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: t... (200; 19.433565ms)
Jan 26 22:48:16.166: INFO: (5) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: test (200; 19.761734ms)
Jan 26 22:48:16.166: INFO: (5) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 19.627234ms)
Jan 26 22:48:16.166: INFO: (5) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 20.007175ms)
Jan 26 22:48:16.167: INFO: (5) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 20.547932ms)
Jan 26 22:48:16.168: INFO: (5) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 22.06791ms)
Jan 26 22:48:16.177: INFO: (6) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 8.823914ms)
Jan 26 22:48:16.177: INFO: (6) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: test (200; 12.532827ms)
Jan 26 22:48:16.181: INFO: (6) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 12.534901ms)
Jan 26 22:48:16.181: INFO: (6) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 12.567183ms)
Jan 26 22:48:16.182: INFO: (6) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 13.278003ms)
Jan 26 22:48:16.182: INFO: (6) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 13.419239ms)
Jan 26 22:48:16.183: INFO: (6) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 14.00141ms)
Jan 26 22:48:16.183: INFO: (6) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 13.941457ms)
Jan 26 22:48:16.188: INFO: (7) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 4.94319ms)
Jan 26 22:48:16.189: INFO: (7) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: test (200; 9.556629ms)
Jan 26 22:48:16.192: INFO: (7) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 9.508332ms)
Jan 26 22:48:16.192: INFO: (7) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 9.469147ms)
Jan 26 22:48:16.192: INFO: (7) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 9.533073ms)
Jan 26 22:48:16.192: INFO: (7) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 9.714429ms)
Jan 26 22:48:16.193: INFO: (7) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: test (200; 10.469204ms)
Jan 26 22:48:16.232: INFO: (8) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 10.418208ms)
Jan 26 22:48:16.232: INFO: (8) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 9.850736ms)
Jan 26 22:48:16.233: INFO: (8) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 12.196557ms)
Jan 26 22:48:16.234: INFO: (8) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 11.348807ms)
Jan 26 22:48:16.234: INFO: (8) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testt... (200; 10.924267ms)
Jan 26 22:48:16.246: INFO: (9) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 11.178562ms)
Jan 26 22:48:16.246: INFO: (9) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 11.270859ms)
Jan 26 22:48:16.247: INFO: (9) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 11.684609ms)
Jan 26 22:48:16.247: INFO: (9) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 11.4908ms)
Jan 26 22:48:16.247: INFO: (9) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 11.755361ms)
Jan 26 22:48:16.247: INFO: (9) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 11.999057ms)
Jan 26 22:48:16.247: INFO: (9) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 12.306145ms)
Jan 26 22:48:16.248: INFO: (9) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 12.960559ms)
Jan 26 22:48:16.248: INFO: (9) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testt... (200; 10.585736ms)
Jan 26 22:48:16.264: INFO: (10) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 13.298132ms)
Jan 26 22:48:16.264: INFO: (10) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 13.095564ms)
Jan 26 22:48:16.265: INFO: (10) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 13.818955ms)
Jan 26 22:48:16.266: INFO: (10) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testtest (200; 14.446366ms)
Jan 26 22:48:16.266: INFO: (10) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 14.43006ms)
Jan 26 22:48:16.266: INFO: (10) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 14.866156ms)
Jan 26 22:48:16.266: INFO: (10) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 15.012456ms)
Jan 26 22:48:16.266: INFO: (10) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 14.895883ms)
Jan 26 22:48:16.266: INFO: (10) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: testtest (200; 14.267529ms)
Jan 26 22:48:16.281: INFO: (11) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 14.373874ms)
Jan 26 22:48:16.281: INFO: (11) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 14.400073ms)
Jan 26 22:48:16.281: INFO: (11) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 14.441994ms)
Jan 26 22:48:16.281: INFO: (11) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 14.293695ms)
Jan 26 22:48:16.282: INFO: (11) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 14.52979ms)
Jan 26 22:48:16.283: INFO: (11) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 15.594255ms)
Jan 26 22:48:16.283: INFO: (11) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 15.774873ms)
Jan 26 22:48:16.283: INFO: (11) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 15.77149ms)
Jan 26 22:48:16.283: INFO: (11) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 15.962182ms)
Jan 26 22:48:16.294: INFO: (12) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 10.448542ms)
Jan 26 22:48:16.294: INFO: (12) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: t... (200; 16.420655ms)
Jan 26 22:48:16.300: INFO: (12) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 16.513209ms)
Jan 26 22:48:16.300: INFO: (12) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 16.407942ms)
Jan 26 22:48:16.300: INFO: (12) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 16.430519ms)
Jan 26 22:48:16.300: INFO: (12) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 16.400558ms)
Jan 26 22:48:16.300: INFO: (12) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 16.695274ms)
Jan 26 22:48:16.300: INFO: (12) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testtestt... (200; 15.43679ms)
Jan 26 22:48:16.317: INFO: (13) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 15.600503ms)
Jan 26 22:48:16.317: INFO: (13) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 15.19905ms)
Jan 26 22:48:16.317: INFO: (13) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 15.493808ms)
Jan 26 22:48:16.317: INFO: (13) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 15.28147ms)
Jan 26 22:48:16.317: INFO: (13) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 16.031825ms)
Jan 26 22:48:16.318: INFO: (13) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 17.687011ms)
Jan 26 22:48:16.319: INFO: (13) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 17.325132ms)
Jan 26 22:48:16.319: INFO: (13) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 17.126012ms)
Jan 26 22:48:16.319: INFO: (13) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 17.207797ms)
Jan 26 22:48:16.319: INFO: (13) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 17.493713ms)
Jan 26 22:48:16.330: INFO: (14) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 10.473132ms)
Jan 26 22:48:16.330: INFO: (14) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 10.590991ms)
Jan 26 22:48:16.331: INFO: (14) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 11.613749ms)
Jan 26 22:48:16.331: INFO: (14) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 11.722889ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 14.873466ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testt... (200; 15.027583ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: test (200; 15.460553ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 15.125052ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 15.171809ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 15.502197ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 15.272118ms)
Jan 26 22:48:16.334: INFO: (14) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 15.232845ms)
Jan 26 22:48:16.335: INFO: (14) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 15.825682ms)
Jan 26 22:48:16.342: INFO: (15) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 6.748904ms)
Jan 26 22:48:16.342: INFO: (15) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 6.866751ms)
Jan 26 22:48:16.342: INFO: (15) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 6.784873ms)
Jan 26 22:48:16.342: INFO: (15) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testt... (200; 7.229716ms)
Jan 26 22:48:16.343: INFO: (15) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 7.463598ms)
Jan 26 22:48:16.343: INFO: (15) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 7.595236ms)
Jan 26 22:48:16.343: INFO: (15) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 8.034518ms)
Jan 26 22:48:16.345: INFO: (15) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: test (200; 13.995659ms)
Jan 26 22:48:16.364: INFO: (16) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: t... (200; 15.833346ms)
Jan 26 22:48:16.365: INFO: (16) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testtesttest (200; 11.779013ms)
Jan 26 22:48:16.378: INFO: (17) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 11.970446ms)
Jan 26 22:48:16.378: INFO: (17) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 12.019765ms)
Jan 26 22:48:16.378: INFO: (17) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 12.045187ms)
Jan 26 22:48:16.379: INFO: (17) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 13.048045ms)
Jan 26 22:48:16.380: INFO: (17) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:1080/proxy/: t... (200; 13.247406ms)
Jan 26 22:48:16.380: INFO: (17) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 13.535202ms)
Jan 26 22:48:16.386: INFO: (18) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 6.520658ms)
Jan 26 22:48:16.387: INFO: (18) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testtest (200; 6.563716ms)
Jan 26 22:48:16.387: INFO: (18) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 6.675938ms)
Jan 26 22:48:16.387: INFO: (18) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 7.173074ms)
Jan 26 22:48:16.387: INFO: (18) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 7.00637ms)
Jan 26 22:48:16.387: INFO: (18) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 7.424698ms)
Jan 26 22:48:16.389: INFO: (18) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 8.969817ms)
Jan 26 22:48:16.390: INFO: (18) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 9.782343ms)
Jan 26 22:48:16.390: INFO: (18) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 9.903992ms)
Jan 26 22:48:16.390: INFO: (18) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 9.800914ms)
Jan 26 22:48:16.390: INFO: (18) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:443/proxy/: t... (200; 10.010247ms)
Jan 26 22:48:16.390: INFO: (18) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 10.122034ms)
Jan 26 22:48:16.390: INFO: (18) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 10.264732ms)
Jan 26 22:48:16.391: INFO: (18) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 11.685303ms)
Jan 26 22:48:16.397: INFO: (19) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 4.944876ms)
Jan 26 22:48:16.397: INFO: (19) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:460/proxy/: tls baz (200; 5.005522ms)
Jan 26 22:48:16.401: INFO: (19) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:160/proxy/: foo (200; 8.527625ms)
Jan 26 22:48:16.401: INFO: (19) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname1/proxy/: tls baz (200; 9.724199ms)
Jan 26 22:48:16.401: INFO: (19) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:1080/proxy/: testt... (200; 10.338249ms)
Jan 26 22:48:16.403: INFO: (19) /api/v1/namespaces/proxy-476/pods/http:proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 10.325041ms)
Jan 26 22:48:16.405: INFO: (19) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj:162/proxy/: bar (200; 12.636911ms)
Jan 26 22:48:16.405: INFO: (19) /api/v1/namespaces/proxy-476/pods/https:proxy-service-gnkhl-tgltj:462/proxy/: tls qux (200; 12.340364ms)
Jan 26 22:48:16.406: INFO: (19) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname2/proxy/: bar (200; 14.000551ms)
Jan 26 22:48:16.406: INFO: (19) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname1/proxy/: foo (200; 13.18007ms)
Jan 26 22:48:16.406: INFO: (19) /api/v1/namespaces/proxy-476/services/proxy-service-gnkhl:portname1/proxy/: foo (200; 14.2056ms)
Jan 26 22:48:16.406: INFO: (19) /api/v1/namespaces/proxy-476/pods/proxy-service-gnkhl-tgltj/proxy/: test (200; 14.082603ms)
Jan 26 22:48:16.406: INFO: (19) /api/v1/namespaces/proxy-476/services/https:proxy-service-gnkhl:tlsportname2/proxy/: tls qux (200; 14.490876ms)
Jan 26 22:48:16.406: INFO: (19) /api/v1/namespaces/proxy-476/services/http:proxy-service-gnkhl:portname2/proxy/: bar (200; 14.181809ms)
STEP: deleting ReplicationController proxy-service-gnkhl in namespace proxy-476, will wait for the garbage collector to delete the pods
Jan 26 22:48:16.466: INFO: Deleting ReplicationController proxy-service-gnkhl took: 6.526889ms
Jan 26 22:48:16.766: INFO: Terminating ReplicationController proxy-service-gnkhl pods took: 300.382996ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:48:32.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-476" for this suite.

• [SLOW TEST:28.712 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":256,"skipped":4300,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:48:32.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:48:33.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:48:35.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:48:37.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:48:39.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715675713, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:48:42.111: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:48:42.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5225" for this suite.
STEP: Destroying namespace "webhook-5225-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.104 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":257,"skipped":4311,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:48:42.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 26 22:48:55.903: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:48:55.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6188" for this suite.

• [SLOW TEST:13.555 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":258,"skipped":4313,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:48:56.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2603
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 26 22:48:56.336: INFO: Found 0 stateful pods, waiting for 3
Jan 26 22:49:06.346: INFO: Found 1 stateful pods, waiting for 3
Jan 26 22:49:16.347: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:49:16.347: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:49:16.347: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 22:49:26.355: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:49:26.355: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:49:26.355: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 26 22:49:26.515: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 26 22:49:36.633: INFO: Updating stateful set ss2
Jan 26 22:49:36.751: INFO: Waiting for Pod statefulset-2603/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:49:46.767: INFO: Waiting for Pod statefulset-2603/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 26 22:49:57.168: INFO: Found 2 stateful pods, waiting for 3
Jan 26 22:50:07.174: INFO: Found 2 stateful pods, waiting for 3
Jan 26 22:50:17.176: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:50:17.176: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 22:50:17.176: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 26 22:50:17.202: INFO: Updating stateful set ss2
Jan 26 22:50:17.330: INFO: Waiting for Pod statefulset-2603/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:50:27.379: INFO: Updating stateful set ss2
Jan 26 22:50:27.404: INFO: Waiting for StatefulSet statefulset-2603/ss2 to complete update
Jan 26 22:50:27.404: INFO: Waiting for Pod statefulset-2603/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:50:37.431: INFO: Waiting for StatefulSet statefulset-2603/ss2 to complete update
Jan 26 22:50:37.431: INFO: Waiting for Pod statefulset-2603/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 26 22:50:47.436: INFO: Waiting for StatefulSet statefulset-2603/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 22:50:57.421: INFO: Deleting all statefulset in ns statefulset-2603
Jan 26 22:50:57.425: INFO: Scaling statefulset ss2 to 0
Jan 26 22:51:38.408: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 22:51:38.412: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:51:38.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2603" for this suite.

• [SLOW TEST:162.422 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":259,"skipped":4339,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:51:38.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-s446
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 22:51:38.597: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-s446" in namespace "subpath-5662" to be "success or failure"
Jan 26 22:51:38.622: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Pending", Reason="", readiness=false. Elapsed: 25.103333ms
Jan 26 22:51:40.633: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035677572s
Jan 26 22:51:42.638: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041684037s
Jan 26 22:51:44.648: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051109085s
Jan 26 22:51:46.659: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 8.062264382s
Jan 26 22:51:48.674: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 10.076921625s
Jan 26 22:51:50.680: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 12.082779256s
Jan 26 22:51:52.687: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 14.090338834s
Jan 26 22:51:54.693: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 16.096597658s
Jan 26 22:51:56.699: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 18.102628018s
Jan 26 22:51:58.706: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 20.109004544s
Jan 26 22:52:00.715: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 22.117866893s
Jan 26 22:52:02.722: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 24.124911343s
Jan 26 22:52:04.728: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 26.131244164s
Jan 26 22:52:06.934: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Running", Reason="", readiness=true. Elapsed: 28.336965307s
Jan 26 22:52:08.940: INFO: Pod "pod-subpath-test-downwardapi-s446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.343549167s
STEP: Saw pod success
Jan 26 22:52:08.940: INFO: Pod "pod-subpath-test-downwardapi-s446" satisfied condition "success or failure"
Jan 26 22:52:08.958: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-s446 container test-container-subpath-downwardapi-s446: 
STEP: delete the pod
Jan 26 22:52:09.090: INFO: Waiting for pod pod-subpath-test-downwardapi-s446 to disappear
Jan 26 22:52:09.101: INFO: Pod pod-subpath-test-downwardapi-s446 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-s446
Jan 26 22:52:09.101: INFO: Deleting pod "pod-subpath-test-downwardapi-s446" in namespace "subpath-5662"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:52:09.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5662" for this suite.

• [SLOW TEST:30.664 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":260,"skipped":4339,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:52:09.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 26 22:52:16.525: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:52:16.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9186" for this suite.

• [SLOW TEST:7.471 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4348,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:52:16.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-fe03bb56-874d-48e7-b99a-e6be90f6f3bc
STEP: Creating secret with name s-test-opt-upd-c3730f54-c583-43e5-979e-5e225b383273
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-fe03bb56-874d-48e7-b99a-e6be90f6f3bc
STEP: Updating secret s-test-opt-upd-c3730f54-c583-43e5-979e-5e225b383273
STEP: Creating secret with name s-test-opt-create-21eb8770-712d-4b78-8ba7-a23e64e91ee7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:53:35.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1605" for this suite.

• [SLOW TEST:79.321 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4355,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:53:35.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:53:36.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2578'
Jan 26 22:53:36.410: INFO: stderr: ""
Jan 26 22:53:36.410: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 26 22:53:36.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2578'
Jan 26 22:53:37.086: INFO: stderr: ""
Jan 26 22:53:37.087: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 26 22:53:38.094: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:38.094: INFO: Found 0 / 1
Jan 26 22:53:39.096: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:39.096: INFO: Found 0 / 1
Jan 26 22:53:40.093: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:40.093: INFO: Found 0 / 1
Jan 26 22:53:41.100: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:41.100: INFO: Found 0 / 1
Jan 26 22:53:42.095: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:42.095: INFO: Found 0 / 1
Jan 26 22:53:43.092: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:43.092: INFO: Found 0 / 1
Jan 26 22:53:44.099: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:44.099: INFO: Found 0 / 1
Jan 26 22:53:45.096: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:45.096: INFO: Found 0 / 1
Jan 26 22:53:46.093: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:46.093: INFO: Found 1 / 1
Jan 26 22:53:46.093: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 26 22:53:46.097: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 26 22:53:46.097: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 22:53:46.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-trxf6 --namespace=kubectl-2578'
Jan 26 22:53:46.354: INFO: stderr: ""
Jan 26 22:53:46.354: INFO: stdout: "Name:         agnhost-master-trxf6\nNamespace:    kubectl-2578\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Sun, 26 Jan 2020 22:53:36 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.2\nIPs:\n  IP:           10.44.0.2\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://b3e5c577c90dfb020f2dbfc0ea44487d964ff3d19ad84840639c2ee7775b55a0\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 26 Jan 2020 22:53:43 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-82xbf (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-82xbf:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-82xbf\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-2578/agnhost-master-trxf6 to jerma-node\n  Normal  Pulled     7s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    4s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    3s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 26 22:53:46.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2578'
Jan 26 22:53:46.606: INFO: stderr: ""
Jan 26 22:53:46.606: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-2578\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: agnhost-master-trxf6\n"
Jan 26 22:53:46.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2578'
Jan 26 22:53:46.733: INFO: stderr: ""
Jan 26 22:53:46.733: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-2578\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.182.38\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.2:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 26 22:53:46.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 26 22:53:46.944: INFO: stderr: ""
Jan 26 22:53:46.944: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Sun, 26 Jan 2020 22:53:43 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 26 Jan 2020 22:52:16 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 26 Jan 2020 22:52:16 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 26 Jan 2020 22:52:16 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 26 Jan 2020 22:52:16 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         22d\n  kubectl-2578                agnhost-master-trxf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 26 22:53:46.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2578'
Jan 26 22:53:47.108: INFO: stderr: ""
Jan 26 22:53:47.108: INFO: stdout: "Name:         kubectl-2578\nLabels:       e2e-framework=kubectl\n              e2e-run=8b6cb1df-2424-43ca-8fbc-531fea4666d9\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:53:47.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2578" for this suite.

• [SLOW TEST:11.201 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":263,"skipped":4368,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:53:47.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jan 26 22:53:47.252: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix273144288/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:53:47.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9720" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":264,"skipped":4374,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:53:47.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:53:47.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6" in namespace "projected-5448" to be "success or failure"
Jan 26 22:53:47.541: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 96.398605ms
Jan 26 22:53:49.549: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103999171s
Jan 26 22:53:51.556: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110897809s
Jan 26 22:53:53.564: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119031785s
Jan 26 22:53:55.573: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128554522s
Jan 26 22:53:57.585: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14015882s
Jan 26 22:53:59.597: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.152045413s
STEP: Saw pod success
Jan 26 22:53:59.597: INFO: Pod "downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6" satisfied condition "success or failure"
Jan 26 22:53:59.602: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6 container client-container: 
STEP: delete the pod
Jan 26 22:53:59.723: INFO: Waiting for pod downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6 to disappear
Jan 26 22:53:59.744: INFO: Pod downwardapi-volume-c2fa32da-6d90-4ed8-b675-77e0eace0ab6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:53:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5448" for this suite.

• [SLOW TEST:12.407 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4389,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:53:59.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:53:59.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84" in namespace "downward-api-7515" to be "success or failure"
Jan 26 22:53:59.908: INFO: Pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84": Phase="Pending", Reason="", readiness=false. Elapsed: 11.720373ms
Jan 26 22:54:01.916: INFO: Pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020126107s
Jan 26 22:54:03.959: INFO: Pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063228283s
Jan 26 22:54:05.968: INFO: Pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072328072s
Jan 26 22:54:07.979: INFO: Pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082442447s
STEP: Saw pod success
Jan 26 22:54:07.979: INFO: Pod "downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84" satisfied condition "success or failure"
Jan 26 22:54:07.985: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84 container client-container: 
STEP: delete the pod
Jan 26 22:54:08.150: INFO: Waiting for pod downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84 to disappear
Jan 26 22:54:08.170: INFO: Pod downwardapi-volume-7ed8519b-3b00-4d19-9c00-8a23821a9a84 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:54:08.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7515" for this suite.

• [SLOW TEST:8.418 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4419,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:54:08.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:54:08.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af" in namespace "projected-5427" to be "success or failure"
Jan 26 22:54:08.417: INFO: Pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af": Phase="Pending", Reason="", readiness=false. Elapsed: 14.561918ms
Jan 26 22:54:10.427: INFO: Pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024468538s
Jan 26 22:54:12.437: INFO: Pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034317368s
Jan 26 22:54:14.442: INFO: Pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039569151s
Jan 26 22:54:16.454: INFO: Pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051767109s
STEP: Saw pod success
Jan 26 22:54:16.454: INFO: Pod "downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af" satisfied condition "success or failure"
Jan 26 22:54:16.460: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af container client-container: 
STEP: delete the pod
Jan 26 22:54:16.867: INFO: Waiting for pod downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af to disappear
Jan 26 22:54:16.878: INFO: Pod downwardapi-volume-c2570acc-8dcc-4d39-a9a4-c70bbcbf85af no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:54:16.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5427" for this suite.

• [SLOW TEST:8.815 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4428,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:54:17.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2350
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2350 to expose endpoints map[]
Jan 26 22:54:17.339: INFO: Get endpoints failed (35.009728ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 26 22:54:18.347: INFO: successfully validated that service multi-endpoint-test in namespace services-2350 exposes endpoints map[] (1.042641708s elapsed)
STEP: Creating pod pod1 in namespace services-2350
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2350 to expose endpoints map[pod1:[100]]
Jan 26 22:54:22.470: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.103451628s elapsed, will retry)
Jan 26 22:54:25.551: INFO: successfully validated that service multi-endpoint-test in namespace services-2350 exposes endpoints map[pod1:[100]] (7.183724751s elapsed)
STEP: Creating pod pod2 in namespace services-2350
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2350 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 26 22:54:29.782: INFO: Unexpected endpoints: found map[95bdd679-2720-4862-8bef-0955b47276ca:[100]], expected map[pod1:[100] pod2:[101]] (4.210326115s elapsed, will retry)
Jan 26 22:54:31.850: INFO: successfully validated that service multi-endpoint-test in namespace services-2350 exposes endpoints map[pod1:[100] pod2:[101]] (6.277958141s elapsed)
STEP: Deleting pod pod1 in namespace services-2350
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2350 to expose endpoints map[pod2:[101]]
Jan 26 22:54:32.925: INFO: successfully validated that service multi-endpoint-test in namespace services-2350 exposes endpoints map[pod2:[101]] (1.068849095s elapsed)
STEP: Deleting pod pod2 in namespace services-2350
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2350 to expose endpoints map[]
Jan 26 22:54:33.952: INFO: successfully validated that service multi-endpoint-test in namespace services-2350 exposes endpoints map[] (1.021234336s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:54:34.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2350" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:18.285 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":268,"skipped":4460,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:54:35.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead in namespace container-probe-7251
Jan 26 22:54:43.496: INFO: Started pod liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead in namespace container-probe-7251
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 22:54:43.500: INFO: Initial restart count of pod liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead is 0
Jan 26 22:55:01.573: INFO: Restart count of pod container-probe-7251/liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead is now 1 (18.072736364s elapsed)
Jan 26 22:55:21.662: INFO: Restart count of pod container-probe-7251/liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead is now 2 (38.162164267s elapsed)
Jan 26 22:55:41.830: INFO: Restart count of pod container-probe-7251/liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead is now 3 (58.330274184s elapsed)
Jan 26 22:56:01.966: INFO: Restart count of pod container-probe-7251/liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead is now 4 (1m18.466030184s elapsed)
Jan 26 22:57:02.282: INFO: Restart count of pod container-probe-7251/liveness-b95a5ac0-d55a-4360-ab81-02324fa5dead is now 5 (2m18.782185083s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:57:02.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7251" for this suite.

• [SLOW TEST:147.054 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4469,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:57:02.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-8496
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8496
STEP: Deleting pre-stop pod
Jan 26 22:57:25.615: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:57:25.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8496" for this suite.

• [SLOW TEST:23.323 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":270,"skipped":4470,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:57:25.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 26 22:57:25.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9" in namespace "downward-api-2174" to be "success or failure"
Jan 26 22:57:25.992: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9": Phase="Pending", Reason="", readiness=false. Elapsed: 146.669714ms
Jan 26 22:57:28.003: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157827031s
Jan 26 22:57:30.008: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162873427s
Jan 26 22:57:32.015: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169106353s
Jan 26 22:57:34.024: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178193901s
Jan 26 22:57:36.030: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184117783s
STEP: Saw pod success
Jan 26 22:57:36.030: INFO: Pod "downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9" satisfied condition "success or failure"
Jan 26 22:57:36.033: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9 container client-container: 
STEP: delete the pod
Jan 26 22:57:36.350: INFO: Waiting for pod downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9 to disappear
Jan 26 22:57:36.445: INFO: Pod downwardapi-volume-2ebf50ff-ead3-48e6-80fa-124a94c6ddd9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:57:36.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2174" for this suite.

• [SLOW TEST:10.808 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4470,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:57:36.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jan 26 22:57:36.631: INFO: Waiting up to 5m0s for pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06" in namespace "var-expansion-3677" to be "success or failure"
Jan 26 22:57:36.677: INFO: Pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06": Phase="Pending", Reason="", readiness=false. Elapsed: 45.178816ms
Jan 26 22:57:38.685: INFO: Pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053068769s
Jan 26 22:57:40.695: INFO: Pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063664439s
Jan 26 22:57:42.703: INFO: Pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07171433s
Jan 26 22:57:44.714: INFO: Pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082891873s
STEP: Saw pod success
Jan 26 22:57:44.715: INFO: Pod "var-expansion-405850b2-6aae-487c-a12f-28e23331bb06" satisfied condition "success or failure"
Jan 26 22:57:44.718: INFO: Trying to get logs from node jerma-node pod var-expansion-405850b2-6aae-487c-a12f-28e23331bb06 container dapi-container: 
STEP: delete the pod
Jan 26 22:57:44.758: INFO: Waiting for pod var-expansion-405850b2-6aae-487c-a12f-28e23331bb06 to disappear
Jan 26 22:57:44.865: INFO: Pod var-expansion-405850b2-6aae-487c-a12f-28e23331bb06 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:57:44.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3677" for this suite.

• [SLOW TEST:8.401 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4476,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:57:44.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:57:45.032: INFO: Waiting up to 5m0s for pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25" in namespace "security-context-test-2901" to be "success or failure"
Jan 26 22:57:45.037: INFO: Pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 5.417143ms
Jan 26 22:57:47.043: INFO: Pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010987272s
Jan 26 22:57:49.049: INFO: Pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016779048s
Jan 26 22:57:51.056: INFO: Pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023808206s
Jan 26 22:57:53.067: INFO: Pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035270295s
Jan 26 22:57:53.067: INFO: Pod "busybox-user-65534-36e40a03-d4d4-44d1-b17f-3a88dc30bb25" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:57:53.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2901" for this suite.

• [SLOW TEST:8.223 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4476,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:57:53.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:57:53.212: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 11.30505ms)
Jan 26 22:57:53.219: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.635947ms)
Jan 26 22:57:53.225: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.161592ms)
Jan 26 22:57:53.228: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.811276ms)
Jan 26 22:57:53.232: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.708266ms)
Jan 26 22:57:53.236: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.99424ms)
Jan 26 22:57:53.240: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.035833ms)
Jan 26 22:57:53.244: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.575395ms)
Jan 26 22:57:53.247: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.3996ms)
Jan 26 22:57:53.250: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.683018ms)
Jan 26 22:57:53.254: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.867722ms)
Jan 26 22:57:53.257: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.528551ms)
Jan 26 22:57:53.261: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.521294ms)
Jan 26 22:57:53.265: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.701882ms)
Jan 26 22:57:53.268: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.191455ms)
Jan 26 22:57:53.272: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.365709ms)
Jan 26 22:57:53.276: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.243907ms)
Jan 26 22:57:53.284: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 8.400621ms)
Jan 26 22:57:53.287: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.411824ms)
Jan 26 22:57:53.291: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.38183ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:57:53.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4033" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":274,"skipped":4484,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:57:53.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 26 22:57:54.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 26 22:57:56.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:57:58.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:58:00.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 22:58:02.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715676274, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 26 22:58:05.196: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:58:06.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6996-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:58:07.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5512" for this suite.
STEP: Destroying namespace "webhook-5512-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.309 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":275,"skipped":4484,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:58:07.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 26 22:58:07.852: INFO: Waiting up to 5m0s for pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a" in namespace "emptydir-6675" to be "success or failure"
Jan 26 22:58:07.883: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.861894ms
Jan 26 22:58:09.892: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040634216s
Jan 26 22:58:11.899: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047616481s
Jan 26 22:58:13.914: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062521108s
Jan 26 22:58:15.921: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068987686s
Jan 26 22:58:17.930: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078681135s
STEP: Saw pod success
Jan 26 22:58:17.930: INFO: Pod "pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a" satisfied condition "success or failure"
Jan 26 22:58:17.936: INFO: Trying to get logs from node jerma-node pod pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a container test-container: 
STEP: delete the pod
Jan 26 22:58:17.993: INFO: Waiting for pod pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a to disappear
Jan 26 22:58:18.003: INFO: Pod pod-26ec7e2d-fe45-469f-b393-ff9a038aa03a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:58:18.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6675" for this suite.

• [SLOW TEST:10.375 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4502,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:58:18.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 26 22:58:31.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-648" for this suite.

• [SLOW TEST:13.265 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":277,"skipped":4507,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 26 22:58:31.285: INFO: Running AfterSuite actions on all nodes
Jan 26 22:58:31.285: INFO: Running AfterSuite actions on node 1
Jan 26 22:58:31.285: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:742

Ran 278 of 4814 Specs in 6635.978 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (6636.11s)
FAIL