I0909 17:56:21.510930 6 e2e.go:224] Starting e2e run "c4a8a748-f2c5-11ea-88c2-0242ac110007" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1599674180 - Will randomize all specs
Will run 201 of 2164 specs
Sep 9 17:56:21.667: INFO: >>> kubeConfig: /root/.kube/config
Sep 9 17:56:21.671: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 9 17:56:21.685: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 9 17:56:21.719: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 9 17:56:21.719: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 9 17:56:21.719: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 9 17:56:21.726: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 9 17:56:21.726: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 9 17:56:21.726: INFO: e2e test version: v1.13.12
Sep 9 17:56:21.727: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 17:56:21.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Sep 9 17:56:21.820: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep 9 17:56:21.822: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 17:56:22.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-8fg27" for this suite.
Sep 9 17:56:28.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 17:56:28.929: INFO: namespace: e2e-tests-custom-resource-definition-8fg27, resource: bindings, ignored listing per whitelist
Sep 9 17:56:28.984: INFO: namespace e2e-tests-custom-resource-definition-8fg27 deletion completed in 6.083725725s
• [SLOW TEST:7.257 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 17:56:28.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep 9 17:56:29.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-qj2dk" to be "success or failure"
Sep 9 17:56:29.135: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 30.200918ms
Sep 9 17:56:31.173: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068811799s
Sep 9 17:56:33.176: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071551209s
Sep 9 17:56:35.602: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497816572s
Sep 9 17:56:38.009: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904120925s
Sep 9 17:56:40.247: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.142594796s
Sep 9 17:56:42.250: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.145247377s
Sep 9 17:56:44.531: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.426091955s
Sep 9 17:56:47.768: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.663312061s
Sep 9 17:56:49.771: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.666978772s
Sep 9 17:56:53.373: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 24.268656485s
Sep 9 17:56:59.164: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 30.060064992s
Sep 9 17:57:01.200: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 32.095521712s
Sep 9 17:57:03.203: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.098428219s
Sep 9 17:57:05.205: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 36.100931741s
Sep 9 17:57:09.299: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 40.194187889s
Sep 9 17:57:11.302: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 42.197272757s
Sep 9 17:57:13.305: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 44.200565623s
Sep 9 17:57:16.504: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 47.399986851s
Sep 9 17:57:18.507: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 49.40292561s
Sep 9 17:57:20.888: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 51.783611962s
Sep 9 17:57:22.896: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 53.791791408s
Sep 9 17:57:24.920: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 55.815379675s
Sep 9 17:57:27.095: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 57.990235236s
Sep 9 17:57:29.693: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.588424902s
Sep 9 17:57:31.695: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 1m2.590810453s
Sep 9 17:57:33.698: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m4.59388004s
STEP: Saw pod success
Sep 9 17:57:33.698: INFO: Pod "downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 17:57:33.700: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007 container client-container:
STEP: delete the pod
Sep 9 17:57:35.429: INFO: Waiting for pod downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007 to disappear
Sep 9 17:57:35.779: INFO: Pod downwardapi-volume-c9782700-f2c5-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 17:57:35.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qj2dk" for this suite.
Sep 9 17:57:45.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 17:57:45.114: INFO: namespace: e2e-tests-projected-qj2dk, resource: bindings, ignored listing per whitelist
Sep 9 17:57:45.117: INFO: namespace e2e-tests-projected-qj2dk deletion completed in 8.97789254s
• [SLOW TEST:76.132 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 17:57:45.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Sep 9 17:57:45.207: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-mk6zn" to be "success or failure"
Sep 9 17:57:45.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.520928ms
Sep 9 17:57:47.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056386017s
Sep 9 17:57:49.267: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059449983s
Sep 9 17:57:51.339: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131656546s
Sep 9 17:57:53.342: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134514077s
Sep 9 17:57:55.674: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.466041189s
Sep 9 17:57:57.933: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.725454135s
STEP: Saw pod success
Sep 9 17:57:57.933: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Sep 9 17:57:57.936: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Sep 9 17:57:58.170: INFO: Waiting for pod pod-host-path-test to disappear
Sep 9 17:57:58.230: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 17:57:58.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-mk6zn" for this suite.
Sep 9 17:58:07.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 17:58:07.242: INFO: namespace: e2e-tests-hostpath-mk6zn, resource: bindings, ignored listing per whitelist
Sep 9 17:58:07.278: INFO: namespace e2e-tests-hostpath-mk6zn deletion completed in 9.045661975s
• [SLOW TEST:22.161 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 17:58:07.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 9 17:58:09.614: INFO: Waiting up to 5m0s for pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-4hwjt" to be "success or failure"
Sep 9 17:58:11.016: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 1.402689638s
Sep 9 17:58:13.843: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22930549s
Sep 9 17:58:16.103: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489268592s
Sep 9 17:58:18.192: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.578698497s
Sep 9 17:58:20.493: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.87879949s
Sep 9 17:58:23.155: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.54154523s
Sep 9 17:58:25.159: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.544813663s
Sep 9 17:58:27.400: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.786596182s
Sep 9 17:58:29.405: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.790759193s
Sep 9 17:58:31.417: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.803751542s
Sep 9 17:58:34.598: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 24.984593811s
Sep 9 17:58:39.160: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 29.546613526s
Sep 9 17:58:41.163: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.548899242s
Sep 9 17:58:43.417: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 33.803748423s
Sep 9 17:58:45.424: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.810304907s
STEP: Saw pod success
Sep 9 17:58:45.424: INFO: Pod "pod-052f566e-f2c6-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 17:58:45.427: INFO: Trying to get logs from node hunter-worker2 pod pod-052f566e-f2c6-11ea-88c2-0242ac110007 container test-container:
STEP: delete the pod
Sep 9 17:58:46.730: INFO: Waiting for pod pod-052f566e-f2c6-11ea-88c2-0242ac110007 to disappear
Sep 9 17:58:46.789: INFO: Pod pod-052f566e-f2c6-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 17:58:46.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4hwjt" for this suite.
Sep 9 17:58:54.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 17:58:54.825: INFO: namespace: e2e-tests-emptydir-4hwjt, resource: bindings, ignored listing per whitelist
Sep 9 17:58:54.858: INFO: namespace e2e-tests-emptydir-4hwjt deletion completed in 8.066177723s
• [SLOW TEST:47.580 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 17:58:54.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 17:59:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lm2ph" for this suite.
Sep 9 18:00:18.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:00:18.788: INFO: namespace: e2e-tests-kubelet-test-lm2ph, resource: bindings, ignored listing per whitelist
Sep 9 18:00:18.798: INFO: namespace e2e-tests-kubelet-test-lm2ph deletion completed in 1m10.082455747s
• [SLOW TEST:83.939 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:00:18.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Sep 9 18:00:19.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:22.336: INFO: stderr: ""
Sep 9 18:00:22.336: INFO: stdout: "pod/pause created\n"
Sep 9 18:00:22.336: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Sep 9 18:00:22.336: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8h6xk" to be "running and ready"
Sep 9 18:00:22.342: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.590839ms
Sep 9 18:00:24.471: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135016708s
Sep 9 18:00:26.474: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137896404s
Sep 9 18:00:28.481: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.144763365s
Sep 9 18:00:28.481: INFO: Pod "pause" satisfied condition "running and ready"
Sep 9 18:00:28.481: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Sep 9 18:00:28.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:28.581: INFO: stderr: ""
Sep 9 18:00:28.581: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Sep 9 18:00:28.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:28.678: INFO: stderr: ""
Sep 9 18:00:28.678: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n"
STEP: removing the label testing-label of a pod
Sep 9 18:00:28.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:28.773: INFO: stderr: ""
Sep 9 18:00:28.774: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Sep 9 18:00:28.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:28.869: INFO: stderr: ""
Sep 9 18:00:28.869: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Sep 9 18:00:28.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:29.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 9 18:00:29.027: INFO: stdout: "pod \"pause\" force deleted\n"
Sep 9 18:00:29.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8h6xk'
Sep 9 18:00:29.173: INFO: stderr: "No resources found.\n"
Sep 9 18:00:29.173: INFO: stdout: ""
Sep 9 18:00:29.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8h6xk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 9 18:00:29.275: INFO: stderr: ""
Sep 9 18:00:29.275: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:00:29.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8h6xk" for this suite.
Sep 9 18:00:37.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:00:37.372: INFO: namespace: e2e-tests-kubectl-8h6xk, resource: bindings, ignored listing per whitelist
Sep 9 18:00:37.385: INFO: namespace e2e-tests-kubectl-8h6xk deletion completed in 8.107342803s
• [SLOW TEST:18.587 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:00:37.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-bs4qg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 9 18:00:38.990: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 9 18:01:40.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.221:8080/dial?request=hostName&protocol=udp&host=10.244.1.220&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-bs4qg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 9 18:01:40.892: INFO: >>> kubeConfig: /root/.kube/config
I0909 18:01:40.920993 6 log.go:172] (0xc00092de40) (0xc0019ec960) Create stream
I0909 18:01:40.921029 6 log.go:172] (0xc00092de40) (0xc0019ec960) Stream added, broadcasting: 1
I0909 18:01:40.923126 6 log.go:172] (0xc00092de40) Reply frame received for 1
I0909 18:01:40.923160 6 log.go:172] (0xc00092de40) (0xc001df6f00) Create stream
I0909 18:01:40.923173 6 log.go:172] (0xc00092de40) (0xc001df6f00) Stream added, broadcasting: 3
I0909 18:01:40.924133 6 log.go:172] (0xc00092de40) Reply frame received for 3
I0909 18:01:40.924159 6 log.go:172] (0xc00092de40) (0xc0019eca00) Create stream
I0909 18:01:40.924168 6 log.go:172] (0xc00092de40) (0xc0019eca00) Stream added, broadcasting: 5
I0909 18:01:40.925110 6 log.go:172] (0xc00092de40) Reply frame received for 5
I0909 18:01:41.059774 6 log.go:172] (0xc00092de40) Data frame received for 3
I0909 18:01:41.059793 6 log.go:172] (0xc001df6f00) (3) Data frame handling
I0909 18:01:41.059805 6 log.go:172] (0xc001df6f00) (3) Data frame sent
I0909 18:01:41.060392 6 log.go:172] (0xc00092de40) Data frame received for 3
I0909 18:01:41.060434 6 log.go:172] (0xc00092de40) Data frame received for 5
I0909 18:01:41.060491 6 log.go:172] (0xc0019eca00) (5) Data frame handling
I0909 18:01:41.060529 6 log.go:172] (0xc001df6f00) (3) Data frame handling
I0909 18:01:41.062475 6 log.go:172] (0xc00092de40) Data frame received for 1
I0909 18:01:41.062491 6 log.go:172] (0xc0019ec960) (1) Data frame handling
I0909 18:01:41.062500 6 log.go:172] (0xc0019ec960) (1) Data frame sent
I0909 18:01:41.062513 6 log.go:172] (0xc00092de40) (0xc0019ec960) Stream removed, broadcasting: 1
I0909 18:01:41.062524 6 log.go:172] (0xc00092de40) Go away received
I0909 18:01:41.062654 6 log.go:172] (0xc00092de40) (0xc0019ec960) Stream removed, broadcasting: 1
I0909 18:01:41.062681 6 log.go:172] (0xc00092de40) (0xc001df6f00) Stream removed, broadcasting: 3
I0909 18:01:41.062697 6 log.go:172] (0xc00092de40) (0xc0019eca00) Stream removed, broadcasting: 5
Sep 9 18:01:41.062: INFO: Waiting for endpoints: map[]
Sep 9 18:01:41.065: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.221:8080/dial?request=hostName&protocol=udp&host=10.244.2.214&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-bs4qg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 9 18:01:41.065: INFO: >>> kubeConfig: /root/.kube/config
I0909 18:01:41.090156 6 log.go:172] (0xc0014622c0) (0xc0010f1540) Create stream
I0909 18:01:41.090175 6 log.go:172] (0xc0014622c0) (0xc0010f1540) Stream added, broadcasting: 1
I0909 18:01:41.091928 6 log.go:172] (0xc0014622c0) Reply frame received for 1
I0909 18:01:41.091953 6 log.go:172] (0xc0014622c0) (0xc001ba0000) Create stream
I0909 18:01:41.091962 6 log.go:172] (0xc0014622c0) (0xc001ba0000) Stream added, broadcasting: 3
I0909 18:01:41.092944 6 log.go:172] (0xc0014622c0) Reply frame received for 3
I0909 18:01:41.092996 6 log.go:172] (0xc0014622c0) (0xc0010f15e0) Create stream
I0909 18:01:41.093022 6 log.go:172] (0xc0014622c0) (0xc0010f15e0) Stream added, broadcasting: 5
I0909 18:01:41.094248 6 log.go:172] (0xc0014622c0) Reply frame received for 5
I0909 18:01:41.160931 6 log.go:172] (0xc0014622c0) Data frame received for 3
I0909 18:01:41.160972 6 log.go:172] (0xc001ba0000) (3) Data frame handling
I0909 18:01:41.160992 6 log.go:172] (0xc001ba0000) (3) Data frame sent
I0909 18:01:41.161503 6 log.go:172] (0xc0014622c0) Data frame received for 3
I0909 18:01:41.161522 6 log.go:172] (0xc001ba0000) (3) Data frame handling
I0909 18:01:41.161533 6 log.go:172] (0xc0014622c0) Data frame received for 5
I0909 18:01:41.161555 6 log.go:172] (0xc0010f15e0) (5) Data frame handling
I0909 18:01:41.162737 6 log.go:172] (0xc0014622c0) Data frame received for 1
I0909 18:01:41.162761 6 log.go:172] (0xc0010f1540) (1) Data frame handling
I0909 18:01:41.162772 6 log.go:172] (0xc0010f1540) (1) Data frame sent
I0909 18:01:41.162784 6 log.go:172] (0xc0014622c0) (0xc0010f1540) Stream removed, broadcasting: 1
I0909 18:01:41.162864 6 log.go:172] (0xc0014622c0) (0xc0010f1540) Stream removed, broadcasting: 1
I0909 18:01:41.162883 6 log.go:172] (0xc0014622c0) (0xc001ba0000) Stream removed, broadcasting: 3
I0909 18:01:41.162897 6 log.go:172] (0xc0014622c0) (0xc0010f15e0) Stream removed, broadcasting: 5
I0909 18:01:41.162947 6 log.go:172] (0xc0014622c0) Go away received
Sep 9 18:01:41.162: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:01:41.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-bs4qg" for this suite.
Sep 9 18:02:05.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:02:05.603: INFO: namespace: e2e-tests-pod-network-test-bs4qg, resource: bindings, ignored listing per whitelist
Sep 9 18:02:05.618: INFO: namespace e2e-tests-pod-network-test-bs4qg deletion completed in 24.451440133s
• [SLOW TEST:88.232 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:02:05.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 9 18:02:05.722: INFO: Waiting up to 5m0s for pod "pod-921c75c8-f2c6-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-nwm2h" to be "success or failure"
Sep 9 18:02:05.734: INFO: Pod "pod-921c75c8-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.725404ms
Sep 9 18:02:07.737: INFO: Pod "pod-921c75c8-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015785983s
Sep 9 18:02:09.817: INFO: Pod "pod-921c75c8-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094916842s
Sep 9 18:02:11.820: INFO: Pod "pod-921c75c8-f2c6-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09814679s
STEP: Saw pod success
Sep 9 18:02:11.820: INFO: Pod "pod-921c75c8-f2c6-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 18:02:11.822: INFO: Trying to get logs from node hunter-worker2 pod pod-921c75c8-f2c6-11ea-88c2-0242ac110007 container test-container:
STEP: delete the pod
Sep 9 18:02:11.840: INFO: Waiting for pod pod-921c75c8-f2c6-11ea-88c2-0242ac110007 to disappear
Sep 9 18:02:11.851: INFO: Pod pod-921c75c8-f2c6-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:02:11.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nwm2h" for this suite.
Sep 9 18:02:20.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:02:20.287: INFO: namespace: e2e-tests-emptydir-nwm2h, resource: bindings, ignored listing per whitelist
Sep 9 18:02:20.306: INFO: namespace e2e-tests-emptydir-nwm2h deletion completed in 8.449139667s
• [SLOW TEST:14.688 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:02:20.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Sep 9 18:02:20.465: INFO: Waiting up to 5m0s for pod "var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007" in namespace "e2e-tests-var-expansion-hp99n" to be "success or failure"
Sep 9 18:02:20.500: INFO: Pod "var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.440809ms
Sep 9 18:02:22.505: INFO: Pod "var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039773078s
Sep 9 18:02:24.626: INFO: Pod "var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160911334s
STEP: Saw pod success
Sep 9 18:02:24.626: INFO: Pod "var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 18:02:24.629: INFO: Trying to get logs from node hunter-worker pod var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007 container dapi-container:
STEP: delete the pod
Sep 9 18:02:24.692: INFO: Waiting for pod var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007 to disappear
Sep 9 18:02:24.705: INFO: Pod var-expansion-9ae5571e-f2c6-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:02:24.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-hp99n" for this suite.
Sep 9 18:02:30.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:02:30.787: INFO: namespace: e2e-tests-var-expansion-hp99n, resource: bindings, ignored listing per whitelist
Sep 9 18:02:30.883: INFO: namespace e2e-tests-var-expansion-hp99n deletion completed in 6.174786944s
• [SLOW TEST:10.577 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:02:30.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Sep 9 18:02:35.006: INFO: Pod pod-hostip-a12a0e5e-f2c6-11ea-88c2-0242ac110007 has hostIP: 172.18.0.7
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:02:35.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-sbhs2" for this suite.
Sep 9 18:02:57.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:02:57.056: INFO: namespace: e2e-tests-pods-sbhs2, resource: bindings, ignored listing per whitelist
Sep 9 18:02:57.094: INFO: namespace e2e-tests-pods-sbhs2 deletion completed in 22.083571656s
• [SLOW TEST:26.210 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:02:57.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:03:04.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-d4hsm" for this suite.
Sep 9 18:03:26.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:03:26.474: INFO: namespace: e2e-tests-replication-controller-d4hsm, resource: bindings, ignored listing per whitelist
Sep 9 18:03:26.506: INFO: namespace e2e-tests-replication-controller-d4hsm deletion completed in 22.121222887s
• [SLOW TEST:29.412 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:03:26.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-d646w
Sep 9 18:03:30.671: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-d646w
STEP: checking the pod's current state and verifying that restartCount is present
Sep 9 18:03:30.674: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:07:31.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-d646w" for this suite.
Sep 9 18:07:37.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:07:37.992: INFO: namespace: e2e-tests-container-probe-d646w, resource: bindings, ignored listing per whitelist
Sep 9 18:07:38.046: INFO: namespace e2e-tests-container-probe-d646w deletion completed in 6.10960338s
• [SLOW TEST:251.539 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:07:38.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 9 18:07:42.700: INFO: Successfully updated pod "pod-update-5846e74d-f2c7-11ea-88c2-0242ac110007"
STEP: verifying the updated pod is in kubernetes
Sep 9 18:07:42.709: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:07:42.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nbww5" for this suite.
Sep 9 18:08:04.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:08:04.778: INFO: namespace: e2e-tests-pods-nbww5, resource: bindings, ignored listing per whitelist
Sep 9 18:08:04.808: INFO: namespace e2e-tests-pods-nbww5 deletion completed in 22.095377317s
• [SLOW TEST:26.762 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:08:04.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6841d9be-f2c7-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep 9 18:08:05.045: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-5wnks" to be "success or failure"
Sep 9 18:08:05.055: INFO: Pod "pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.346389ms
Sep 9 18:08:07.060: INFO: Pod "pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01460965s
Sep 9 18:08:09.064: INFO: Pod "pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018582483s
STEP: Saw pod success
Sep 9 18:08:09.064: INFO: Pod "pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 18:08:09.067: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007 container projected-configmap-volume-test:
STEP: delete the pod
Sep 9 18:08:09.208: INFO: Waiting for pod pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007 to disappear
Sep 9 18:08:09.217: INFO: Pod pod-projected-configmaps-68425a27-f2c7-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:08:09.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5wnks" for this suite.
Sep 9 18:08:15.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:08:15.243: INFO: namespace: e2e-tests-projected-5wnks, resource: bindings, ignored listing per whitelist
Sep 9 18:08:15.302: INFO: namespace e2e-tests-projected-5wnks deletion completed in 6.082681505s
• [SLOW TEST:10.494 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:08:15.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0909 18:08:55.968703 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 9 18:08:55.968: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:08:55.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7hmrj" for this suite.
Sep 9 18:09:03.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:09:03.995: INFO: namespace: e2e-tests-gc-7hmrj, resource: bindings, ignored listing per whitelist
Sep 9 18:09:04.146: INFO: namespace e2e-tests-gc-7hmrj deletion completed in 8.174046666s
• [SLOW TEST:48.844 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:09:04.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep 9 18:09:10.830: INFO: Waiting up to 5m0s for pod "client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007" in namespace "e2e-tests-pods-gfkc9" to be "success or failure"
Sep 9 18:09:10.833: INFO: Pod "client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.870622ms
Sep 9 18:09:12.836: INFO: Pod "client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006319921s
Sep 9 18:09:14.840: INFO: Pod "client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.010081257s
Sep 9 18:09:16.844: INFO: Pod "client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013894027s
STEP: Saw pod success
Sep 9 18:09:16.844: INFO: Pod "client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 18:09:16.846: INFO: Trying to get logs from node hunter-worker pod client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007 container env3cont:
STEP: delete the pod
Sep 9 18:09:16.926: INFO: Waiting for pod client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007 to disappear
Sep 9 18:09:16.962: INFO: Pod client-envvars-8f73d6e8-f2c7-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:09:16.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gfkc9" for this suite.
Sep 9 18:10:02.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 18:10:03.063: INFO: namespace: e2e-tests-pods-gfkc9, resource: bindings, ignored listing per whitelist
Sep 9 18:10:03.071: INFO: namespace e2e-tests-pods-gfkc9 deletion completed in 46.106178836s
• [SLOW TEST:58.925 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 18:10:03.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 18:11:03.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-54xh6" for this suite.
Sep 9 18:11:25.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:11:25.290: INFO: namespace: e2e-tests-container-probe-54xh6, resource: bindings, ignored listing per whitelist Sep 9 18:11:25.290: INFO: namespace e2e-tests-container-probe-54xh6 deletion completed in 22.087904598s • [SLOW TEST:82.218 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:11:25.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-pmrpc/configmap-test-dfb4f6ad-f2c7-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:11:25.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-pmrpc" to be "success or failure" Sep 9 18:11:25.427: INFO: Pod "pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.844386ms Sep 9 18:11:27.431: INFO: Pod "pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0111014s Sep 9 18:11:29.434: INFO: Pod "pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014626643s STEP: Saw pod success Sep 9 18:11:29.434: INFO: Pod "pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:11:29.438: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007 container env-test: STEP: delete the pod Sep 9 18:11:29.483: INFO: Waiting for pod pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007 to disappear Sep 9 18:11:29.497: INFO: Pod pod-configmaps-dfb76282-f2c7-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:11:29.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pmrpc" for this suite. 
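For reference, consuming a ConfigMap key as an environment variable looks like the sketch below; the names and values are hypothetical, not the ones generated by this run.

$ kubectl create configmap demo-config --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF
$ kubectl logs cm-env-demo   # prints CONFIG_DATA_1=value-1 once the pod has completed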
Sep 9 18:11:35.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:11:35.658: INFO: namespace: e2e-tests-configmap-pmrpc, resource: bindings, ignored listing per whitelist Sep 9 18:11:35.661: INFO: namespace e2e-tests-configmap-pmrpc deletion completed in 6.137731627s • [SLOW TEST:10.371 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:11:35.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-e5ea085f-f2c7-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:11:35.823: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-rw4td" to be "success or failure" Sep 9 18:11:35.874: INFO: Pod "pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 51.928203ms Sep 9 18:11:37.939: INFO: Pod "pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116208343s Sep 9 18:11:39.943: INFO: Pod "pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.119980336s Sep 9 18:11:41.946: INFO: Pod "pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123904467s STEP: Saw pod success Sep 9 18:11:41.947: INFO: Pod "pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:11:41.949: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007 container configmap-volume-test: STEP: delete the pod Sep 9 18:11:42.140: INFO: Waiting for pod pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007 to disappear Sep 9 18:11:42.144: INFO: Pod pod-configmaps-e5ed0bcf-f2c7-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:11:42.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rw4td" for this suite. 
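The volume-mapping variant above remaps a ConfigMap key to a custom path and reads it as a non-root user; a minimal hand-rolled equivalent, again with hypothetical names, is:

$ kubectl create configmap demo-volume-config --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root UID; ConfigMap files default to mode 0644, so they stay readable
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/path/to/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-volume-config
      items:
      - key: data-1
        path: path/to/data-1        # key remapped to a nested path under the mount
EOF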
Sep 9 18:11:48.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:11:48.195: INFO: namespace: e2e-tests-configmap-rw4td, resource: bindings, ignored listing per whitelist Sep 9 18:11:48.314: INFO: namespace e2e-tests-configmap-rw4td deletion completed in 6.165811682s • [SLOW TEST:12.653 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:11:48.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-67q76 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-67q76 STEP: Deleting pre-stop pod Sep 9 18:12:01.606: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:12:01.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-67q76" for this suite. 
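The PreStop behavior above hinges on a lifecycle hook that runs before the container receives SIGTERM; a minimal sketch, using a placeholder endpoint rather than the tester pod from this run, looks like:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container on deletion, before SIGTERM is delivered.
          # The URL is a placeholder; substitute whatever should be notified.
          command: ["wget", "-qO-", "http://tester.default.svc:8080/prestop"]
EOF
$ kubectl delete pod prestop-demo   # triggers the preStop hook, then normal termination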
Sep 9 18:12:41.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:12:41.650: INFO: namespace: e2e-tests-prestop-67q76, resource: bindings, ignored listing per whitelist Sep 9 18:12:41.707: INFO: namespace e2e-tests-prestop-67q76 deletion completed in 40.088954519s • [SLOW TEST:53.393 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:12:41.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-csszr STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 9 18:12:41.817: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 9 18:13:05.943: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-csszr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:13:05.943: INFO: >>> kubeConfig: /root/.kube/config I0909 18:13:05.967766 6 log.go:172] (0xc000fd0420) (0xc000e3d2c0) Create stream I0909 18:13:05.967800 6 log.go:172] (0xc000fd0420) (0xc000e3d2c0) Stream added, broadcasting: 1 I0909 18:13:05.970617 6 log.go:172] (0xc000fd0420) Reply frame received for 1 I0909 18:13:05.970656 6 log.go:172] (0xc000fd0420) (0xc000e3d360) Create stream I0909 18:13:05.970672 6 log.go:172] (0xc000fd0420) (0xc000e3d360) Stream added, broadcasting: 3 I0909 18:13:05.971626 6 log.go:172] (0xc000fd0420) Reply frame received for 3 I0909 18:13:05.971667 6 log.go:172] (0xc000fd0420) (0xc0010f0d20) Create stream I0909 18:13:05.971683 6 log.go:172] (0xc000fd0420) (0xc0010f0d20) Stream added, broadcasting: 5 I0909 18:13:05.972698 6 log.go:172] (0xc000fd0420) Reply frame received for 5 I0909 18:13:06.066069 6 log.go:172] (0xc000fd0420) Data frame received for 5 I0909 18:13:06.066125 6 log.go:172] (0xc0010f0d20) (5) Data frame handling I0909 18:13:06.066160 6 log.go:172] (0xc000fd0420) Data frame received for 3 I0909 18:13:06.066178 6 log.go:172] (0xc000e3d360) (3) Data frame handling I0909 18:13:06.066196 6 log.go:172] (0xc000e3d360) (3) Data frame sent I0909 18:13:06.066210 6 log.go:172] (0xc000fd0420) Data frame received for 3 I0909 18:13:06.066219 6 log.go:172] (0xc000e3d360) (3) Data frame handling I0909 18:13:06.067485 6 log.go:172] (0xc000fd0420) Data frame received for 1 I0909 18:13:06.067512 6 log.go:172] 
(0xc000e3d2c0) (1) Data frame handling I0909 18:13:06.067521 6 log.go:172] (0xc000e3d2c0) (1) Data frame sent I0909 18:13:06.067531 6 log.go:172] (0xc000fd0420) (0xc000e3d2c0) Stream removed, broadcasting: 1 I0909 18:13:06.067574 6 log.go:172] (0xc000fd0420) Go away received I0909 18:13:06.067603 6 log.go:172] (0xc000fd0420) (0xc000e3d2c0) Stream removed, broadcasting: 1 I0909 18:13:06.067623 6 log.go:172] (0xc000fd0420) (0xc000e3d360) Stream removed, broadcasting: 3 I0909 18:13:06.067633 6 log.go:172] (0xc000fd0420) (0xc0010f0d20) Stream removed, broadcasting: 5 Sep 9 18:13:06.067: INFO: Found all expected endpoints: [netserver-0] Sep 9 18:13:06.070: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.247:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-csszr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:13:06.070: INFO: >>> kubeConfig: /root/.kube/config I0909 18:13:06.099134 6 log.go:172] (0xc00088d600) (0xc0010f0fa0) Create stream I0909 18:13:06.099162 6 log.go:172] (0xc00088d600) (0xc0010f0fa0) Stream added, broadcasting: 1 I0909 18:13:06.101423 6 log.go:172] (0xc00088d600) Reply frame received for 1 I0909 18:13:06.101469 6 log.go:172] (0xc00088d600) (0xc00106e320) Create stream I0909 18:13:06.101484 6 log.go:172] (0xc00088d600) (0xc00106e320) Stream added, broadcasting: 3 I0909 18:13:06.102667 6 log.go:172] (0xc00088d600) Reply frame received for 3 I0909 18:13:06.102716 6 log.go:172] (0xc00088d600) (0xc001f4ad20) Create stream I0909 18:13:06.102733 6 log.go:172] (0xc00088d600) (0xc001f4ad20) Stream added, broadcasting: 5 I0909 18:13:06.103805 6 log.go:172] (0xc00088d600) Reply frame received for 5 I0909 18:13:06.185343 6 log.go:172] (0xc00088d600) Data frame received for 3 I0909 18:13:06.185400 6 log.go:172] (0xc00106e320) (3) Data frame handling I0909 18:13:06.185456 6 log.go:172] (0xc00106e320) (3) Data frame sent I0909 18:13:06.185730 6 log.go:172] (0xc00088d600) Data frame received for 5 I0909 18:13:06.185796 6 log.go:172] (0xc001f4ad20) (5) Data frame handling I0909 18:13:06.185839 6 log.go:172] (0xc00088d600) Data frame received for 3 I0909 18:13:06.185864 6 log.go:172] (0xc00106e320) (3) Data frame handling I0909 18:13:06.188148 6 log.go:172] (0xc00088d600) Data frame received for 1 I0909 18:13:06.188196 6 log.go:172] (0xc0010f0fa0) (1) Data frame handling I0909 18:13:06.188229 6 log.go:172] (0xc0010f0fa0) (1) Data frame sent I0909 18:13:06.188272 6 log.go:172] (0xc00088d600) (0xc0010f0fa0) Stream removed, broadcasting: 1 I0909 18:13:06.188403 6 log.go:172] (0xc00088d600) (0xc0010f0fa0) Stream removed, broadcasting: 1 I0909 18:13:06.188458 6 log.go:172] (0xc00088d600) (0xc00106e320) Stream removed, broadcasting: 3 I0909 18:13:06.188488 6 log.go:172] (0xc00088d600) (0xc001f4ad20) Stream removed, broadcasting: 5 Sep 9 18:13:06.188: INFO: Found all expected endpoints: [netserver-1] I0909 18:13:06.188568 6 log.go:172] (0xc00088d600) Go away received [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:13:06.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-csszr" for this suite. 
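The endpoint checks above amount to curling each netserver pod's IP on port 8080 from the hostexec helper pod; while the test namespace still exists, the same probe can be issued by hand:

$ POD_IP=$(kubectl -n e2e-tests-pod-network-test-csszr get pod netserver-0 -o jsonpath='{.status.podIP}')
$ kubectl -n e2e-tests-pod-network-test-csszr exec host-test-container-pod -c hostexec -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://${POD_IP}:8080/hostName"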
Sep 9 18:13:30.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:13:30.237: INFO: namespace: e2e-tests-pod-network-test-csszr, resource: bindings, ignored listing per whitelist Sep 9 18:13:30.286: INFO: namespace e2e-tests-pod-network-test-csszr deletion completed in 24.092586067s • [SLOW TEST:48.578 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:13:30.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-85lg4 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-85lg4 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-85lg4 Sep 9 18:13:30.406: INFO: Found 0 stateful pods, waiting for 1 Sep 9 18:13:40.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 9 18:13:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:13:40.719: INFO: stderr: "I0909 18:13:40.558604 219 log.go:172] (0xc00015c790) (0xc0006914a0) Create stream\nI0909 18:13:40.558659 219 log.go:172] (0xc00015c790) (0xc0006914a0) Stream added, broadcasting: 1\nI0909 18:13:40.561245 219 log.go:172] (0xc00015c790) Reply frame received for 1\nI0909 18:13:40.561274 219 log.go:172] (0xc00015c790) (0xc000144000) Create stream\nI0909 18:13:40.561287 219 log.go:172] (0xc00015c790) (0xc000144000) Stream added, broadcasting: 3\nI0909 18:13:40.562384 219 log.go:172] (0xc00015c790) Reply frame received for 3\nI0909 18:13:40.562443 219 log.go:172] (0xc00015c790) (0xc000668000) Create stream\nI0909 18:13:40.562464 219 log.go:172] (0xc00015c790) (0xc000668000) Stream added, broadcasting: 5\nI0909 18:13:40.563304 219 log.go:172] 
(0xc00015c790) Reply frame received for 5\nI0909 18:13:40.711014 219 log.go:172] (0xc00015c790) Data frame received for 3\nI0909 18:13:40.711049 219 log.go:172] (0xc000144000) (3) Data frame handling\nI0909 18:13:40.711149 219 log.go:172] (0xc000144000) (3) Data frame sent\nI0909 18:13:40.711172 219 log.go:172] (0xc00015c790) Data frame received for 3\nI0909 18:13:40.711181 219 log.go:172] (0xc000144000) (3) Data frame handling\nI0909 18:13:40.711237 219 log.go:172] (0xc00015c790) Data frame received for 5\nI0909 18:13:40.711257 219 log.go:172] (0xc000668000) (5) Data frame handling\nI0909 18:13:40.713515 219 log.go:172] (0xc00015c790) Data frame received for 1\nI0909 18:13:40.713527 219 log.go:172] (0xc0006914a0) (1) Data frame handling\nI0909 18:13:40.713540 219 log.go:172] (0xc0006914a0) (1) Data frame sent\nI0909 18:13:40.713550 219 log.go:172] (0xc00015c790) (0xc0006914a0) Stream removed, broadcasting: 1\nI0909 18:13:40.713614 219 log.go:172] (0xc00015c790) Go away received\nI0909 18:13:40.713862 219 log.go:172] (0xc00015c790) (0xc0006914a0) Stream removed, broadcasting: 1\nI0909 18:13:40.713896 219 log.go:172] (0xc00015c790) (0xc000144000) Stream removed, broadcasting: 3\nI0909 18:13:40.713911 219 log.go:172] (0xc00015c790) (0xc000668000) Stream removed, broadcasting: 5\n" Sep 9 18:13:40.719: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:13:40.719: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:13:40.723: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 9 18:13:50.749: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:13:50.749: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:13:50.788: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999784s Sep 9 18:13:51.793: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.968983924s Sep 9 18:13:52.797: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964230377s Sep 9 18:13:53.801: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.960132907s Sep 9 18:13:54.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955672108s Sep 9 18:13:55.811: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.950928321s Sep 9 18:13:56.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.94591012s Sep 9 18:13:57.821: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.941311552s Sep 9 18:13:58.825: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.936552692s Sep 9 18:13:59.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 931.94344ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-85lg4 Sep 9 18:14:00.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:14:01.061: INFO: stderr: "I0909 18:14:00.969378 241 log.go:172] (0xc00014c8f0) (0xc0005c7400) Create stream\nI0909 18:14:00.969438 241 log.go:172] (0xc00014c8f0) (0xc0005c7400) Stream added, broadcasting: 1\nI0909 18:14:00.971832 241 log.go:172] (0xc00014c8f0) Reply frame received for 1\nI0909 18:14:00.971885 241 log.go:172] (0xc00014c8f0) (0xc0005c74a0) Create stream\nI0909 
18:14:00.971901 241 log.go:172] (0xc00014c8f0) (0xc0005c74a0) Stream added, broadcasting: 3\nI0909 18:14:00.972986 241 log.go:172] (0xc00014c8f0) Reply frame received for 3\nI0909 18:14:00.973022 241 log.go:172] (0xc00014c8f0) (0xc0005c7540) Create stream\nI0909 18:14:00.973034 241 log.go:172] (0xc00014c8f0) (0xc0005c7540) Stream added, broadcasting: 5\nI0909 18:14:00.973932 241 log.go:172] (0xc00014c8f0) Reply frame received for 5\nI0909 18:14:01.056767 241 log.go:172] (0xc00014c8f0) Data frame received for 5\nI0909 18:14:01.056807 241 log.go:172] (0xc0005c7540) (5) Data frame handling\nI0909 18:14:01.056830 241 log.go:172] (0xc00014c8f0) Data frame received for 3\nI0909 18:14:01.056836 241 log.go:172] (0xc0005c74a0) (3) Data frame handling\nI0909 18:14:01.056843 241 log.go:172] (0xc0005c74a0) (3) Data frame sent\nI0909 18:14:01.056848 241 log.go:172] (0xc00014c8f0) Data frame received for 3\nI0909 18:14:01.056852 241 log.go:172] (0xc0005c74a0) (3) Data frame handling\nI0909 18:14:01.058162 241 log.go:172] (0xc00014c8f0) Data frame received for 1\nI0909 18:14:01.058189 241 log.go:172] (0xc0005c7400) (1) Data frame handling\nI0909 18:14:01.058208 241 log.go:172] (0xc0005c7400) (1) Data frame sent\nI0909 18:14:01.058233 241 log.go:172] (0xc00014c8f0) (0xc0005c7400) Stream removed, broadcasting: 1\nI0909 18:14:01.058258 241 log.go:172] (0xc00014c8f0) Go away received\nI0909 18:14:01.058438 241 log.go:172] (0xc00014c8f0) (0xc0005c7400) Stream removed, broadcasting: 1\nI0909 18:14:01.058456 241 log.go:172] (0xc00014c8f0) (0xc0005c74a0) Stream removed, broadcasting: 3\nI0909 18:14:01.058466 241 log.go:172] (0xc00014c8f0) (0xc0005c7540) Stream removed, broadcasting: 5\n" Sep 9 18:14:01.061: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:14:01.061: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:14:01.065: INFO: Found 1 stateful pods, waiting for 3 Sep 9 18:14:11.070: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:14:11.070: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:14:11.070: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 9 18:14:11.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:14:11.268: INFO: stderr: "I0909 18:14:11.195288 263 log.go:172] (0xc0008582c0) (0xc000744640) Create stream\nI0909 18:14:11.195330 263 log.go:172] (0xc0008582c0) (0xc000744640) Stream added, broadcasting: 1\nI0909 18:14:11.197829 263 log.go:172] (0xc0008582c0) Reply frame received for 1\nI0909 18:14:11.197883 263 log.go:172] (0xc0008582c0) (0xc0001fadc0) Create stream\nI0909 18:14:11.197900 263 log.go:172] (0xc0008582c0) (0xc0001fadc0) Stream added, broadcasting: 3\nI0909 18:14:11.199090 263 log.go:172] (0xc0008582c0) Reply frame received for 3\nI0909 18:14:11.199127 263 log.go:172] (0xc0008582c0) (0xc0007446e0) Create stream\nI0909 18:14:11.199139 263 log.go:172] (0xc0008582c0) (0xc0007446e0) Stream added, broadcasting: 5\nI0909 18:14:11.200367 263 log.go:172] (0xc0008582c0) Reply frame received for 5\nI0909 18:14:11.263218 263 log.go:172] (0xc0008582c0) Data frame received 
for 3\nI0909 18:14:11.263248 263 log.go:172] (0xc0001fadc0) (3) Data frame handling\nI0909 18:14:11.263255 263 log.go:172] (0xc0001fadc0) (3) Data frame sent\nI0909 18:14:11.263259 263 log.go:172] (0xc0008582c0) Data frame received for 3\nI0909 18:14:11.263264 263 log.go:172] (0xc0001fadc0) (3) Data frame handling\nI0909 18:14:11.263289 263 log.go:172] (0xc0008582c0) Data frame received for 5\nI0909 18:14:11.263296 263 log.go:172] (0xc0007446e0) (5) Data frame handling\nI0909 18:14:11.264273 263 log.go:172] (0xc0008582c0) Data frame received for 1\nI0909 18:14:11.264297 263 log.go:172] (0xc000744640) (1) Data frame handling\nI0909 18:14:11.264310 263 log.go:172] (0xc000744640) (1) Data frame sent\nI0909 18:14:11.264327 263 log.go:172] (0xc0008582c0) (0xc000744640) Stream removed, broadcasting: 1\nI0909 18:14:11.264511 263 log.go:172] (0xc0008582c0) (0xc000744640) Stream removed, broadcasting: 1\nI0909 18:14:11.264529 263 log.go:172] (0xc0008582c0) (0xc0001fadc0) Stream removed, broadcasting: 3\nI0909 18:14:11.264536 263 log.go:172] (0xc0008582c0) (0xc0007446e0) Stream removed, broadcasting: 5\n" Sep 9 18:14:11.268: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:14:11.268: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:14:11.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:14:11.562: INFO: stderr: "I0909 18:14:11.433114 286 log.go:172] (0xc00013a840) (0xc000738640) Create stream\nI0909 18:14:11.433199 286 log.go:172] (0xc00013a840) (0xc000738640) Stream added, broadcasting: 1\nI0909 18:14:11.436831 286 log.go:172] (0xc00013a840) Reply frame received for 1\nI0909 18:14:11.436906 286 log.go:172] (0xc00013a840) (0xc0005e8c80) Create stream\nI0909 18:14:11.436929 286 log.go:172] (0xc00013a840) (0xc0005e8c80) Stream added, broadcasting: 3\nI0909 18:14:11.438661 286 log.go:172] (0xc00013a840) Reply frame received for 3\nI0909 18:14:11.438684 286 log.go:172] (0xc00013a840) (0xc0005e8dc0) Create stream\nI0909 18:14:11.438690 286 log.go:172] (0xc00013a840) (0xc0005e8dc0) Stream added, broadcasting: 5\nI0909 18:14:11.439454 286 log.go:172] (0xc00013a840) Reply frame received for 5\nI0909 18:14:11.556066 286 log.go:172] (0xc00013a840) Data frame received for 3\nI0909 18:14:11.556104 286 log.go:172] (0xc0005e8c80) (3) Data frame handling\nI0909 18:14:11.556120 286 log.go:172] (0xc0005e8c80) (3) Data frame sent\nI0909 18:14:11.556130 286 log.go:172] (0xc00013a840) Data frame received for 3\nI0909 18:14:11.556138 286 log.go:172] (0xc0005e8c80) (3) Data frame handling\nI0909 18:14:11.556210 286 log.go:172] (0xc00013a840) Data frame received for 5\nI0909 18:14:11.556222 286 log.go:172] (0xc0005e8dc0) (5) Data frame handling\nI0909 18:14:11.558061 286 log.go:172] (0xc00013a840) Data frame received for 1\nI0909 18:14:11.558080 286 log.go:172] (0xc000738640) (1) Data frame handling\nI0909 18:14:11.558110 286 log.go:172] (0xc000738640) (1) Data frame sent\nI0909 18:14:11.558140 286 log.go:172] (0xc00013a840) (0xc000738640) Stream removed, broadcasting: 1\nI0909 18:14:11.558322 286 log.go:172] (0xc00013a840) (0xc000738640) Stream removed, broadcasting: 1\nI0909 18:14:11.558339 286 log.go:172] (0xc00013a840) (0xc0005e8c80) Stream removed, broadcasting: 3\nI0909 18:14:11.558416 286 log.go:172] (0xc00013a840) Go away received\nI0909 
18:14:11.558465 286 log.go:172] (0xc00013a840) (0xc0005e8dc0) Stream removed, broadcasting: 5\n" Sep 9 18:14:11.562: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:14:11.562: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:14:11.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:14:12.087: INFO: stderr: "I0909 18:14:11.697300 309 log.go:172] (0xc000162840) (0xc00069d400) Create stream\nI0909 18:14:11.697347 309 log.go:172] (0xc000162840) (0xc00069d400) Stream added, broadcasting: 1\nI0909 18:14:11.699747 309 log.go:172] (0xc000162840) Reply frame received for 1\nI0909 18:14:11.699789 309 log.go:172] (0xc000162840) (0xc00060c000) Create stream\nI0909 18:14:11.699803 309 log.go:172] (0xc000162840) (0xc00060c000) Stream added, broadcasting: 3\nI0909 18:14:11.700968 309 log.go:172] (0xc000162840) Reply frame received for 3\nI0909 18:14:11.701049 309 log.go:172] (0xc000162840) (0xc00069a000) Create stream\nI0909 18:14:11.701060 309 log.go:172] (0xc000162840) (0xc00069a000) Stream added, broadcasting: 5\nI0909 18:14:11.702038 309 log.go:172] (0xc000162840) Reply frame received for 5\nI0909 18:14:12.077717 309 log.go:172] (0xc000162840) Data frame received for 3\nI0909 18:14:12.077752 309 log.go:172] (0xc00060c000) (3) Data frame handling\nI0909 18:14:12.077775 309 log.go:172] (0xc00060c000) (3) Data frame sent\nI0909 18:14:12.077806 309 log.go:172] (0xc000162840) Data frame received for 3\nI0909 18:14:12.077812 309 log.go:172] (0xc00060c000) (3) Data frame handling\nI0909 18:14:12.078045 309 log.go:172] (0xc000162840) Data frame received for 5\nI0909 18:14:12.078074 309 log.go:172] (0xc00069a000) (5) Data frame handling\nI0909 18:14:12.082994 309 log.go:172] (0xc000162840) Data frame received for 1\nI0909 18:14:12.083024 309 log.go:172] (0xc00069d400) (1) Data frame handling\nI0909 18:14:12.083045 309 log.go:172] (0xc00069d400) (1) Data frame sent\nI0909 18:14:12.083187 309 log.go:172] (0xc000162840) (0xc00069d400) Stream removed, broadcasting: 1\nI0909 18:14:12.083336 309 log.go:172] (0xc000162840) (0xc00069d400) Stream removed, broadcasting: 1\nI0909 18:14:12.083351 309 log.go:172] (0xc000162840) (0xc00060c000) Stream removed, broadcasting: 3\nI0909 18:14:12.083359 309 log.go:172] (0xc000162840) (0xc00069a000) Stream removed, broadcasting: 5\n" Sep 9 18:14:12.087: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:14:12.087: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:14:12.087: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:14:12.244: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Sep 9 18:14:22.252: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:14:22.252: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:14:22.252: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:14:22.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999538s Sep 9 18:14:23.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987756605s Sep 9 18:14:24.284: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 7.98172309s Sep 9 18:14:25.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974639284s Sep 9 18:14:26.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963729637s Sep 9 18:14:27.313: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959207851s Sep 9 18:14:28.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945345124s Sep 9 18:14:29.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939967484s Sep 9 18:14:30.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.933651623s Sep 9 18:14:31.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 928.183183ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-85lg4 Sep 9 18:14:32.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:14:32.586: INFO: stderr: "I0909 18:14:32.499628 331 log.go:172] (0xc0008582c0) (0xc000740640) Create stream\nI0909 18:14:32.499694 331 log.go:172] (0xc0008582c0) (0xc000740640) Stream added, broadcasting: 1\nI0909 18:14:32.502239 331 log.go:172] (0xc0008582c0) Reply frame received for 1\nI0909 18:14:32.502295 331 log.go:172] (0xc0008582c0) (0xc000622be0) Create stream\nI0909 18:14:32.502309 331 log.go:172] (0xc0008582c0) (0xc000622be0) Stream added, broadcasting: 3\nI0909 18:14:32.503374 331 log.go:172] (0xc0008582c0) Reply frame received for 3\nI0909 18:14:32.503422 331 log.go:172] (0xc0008582c0) (0xc000410000) Create stream\nI0909 18:14:32.503441 331 log.go:172] (0xc0008582c0) (0xc000410000) Stream added, broadcasting: 5\nI0909 18:14:32.504289 331 log.go:172] (0xc0008582c0) Reply frame received for 5\nI0909 18:14:32.580537 331 log.go:172] (0xc0008582c0) Data frame received for 3\nI0909 18:14:32.580571 331 log.go:172] (0xc000622be0) (3) Data frame handling\nI0909 18:14:32.580580 331 log.go:172] (0xc000622be0) (3) Data frame sent\nI0909 18:14:32.580585 331 log.go:172] (0xc0008582c0) Data frame received for 3\nI0909 18:14:32.580590 331 log.go:172] (0xc000622be0) (3) Data frame handling\nI0909 18:14:32.580635 331 log.go:172] (0xc0008582c0) Data frame received for 5\nI0909 18:14:32.580691 331 log.go:172] (0xc000410000) (5) Data frame handling\nI0909 18:14:32.581821 331 log.go:172] (0xc0008582c0) Data frame received for 1\nI0909 18:14:32.581837 331 log.go:172] (0xc000740640) (1) Data frame handling\nI0909 18:14:32.581844 331 log.go:172] (0xc000740640) (1) Data frame sent\nI0909 18:14:32.581903 331 log.go:172] (0xc0008582c0) (0xc000740640) Stream removed, broadcasting: 1\nI0909 18:14:32.581965 331 log.go:172] (0xc0008582c0) Go away received\nI0909 18:14:32.582071 331 log.go:172] (0xc0008582c0) (0xc000740640) Stream removed, broadcasting: 1\nI0909 18:14:32.582085 331 log.go:172] (0xc0008582c0) (0xc000622be0) Stream removed, broadcasting: 3\nI0909 18:14:32.582092 331 log.go:172] (0xc0008582c0) (0xc000410000) Stream removed, broadcasting: 5\n" Sep 9 18:14:32.586: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:14:32.586: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:14:32.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-1 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:14:32.782: INFO: stderr: "I0909 18:14:32.717852 353 log.go:172] (0xc000138840) (0xc0005c55e0) Create stream\nI0909 18:14:32.717912 353 log.go:172] (0xc000138840) (0xc0005c55e0) Stream added, broadcasting: 1\nI0909 18:14:32.720379 353 log.go:172] (0xc000138840) Reply frame received for 1\nI0909 18:14:32.720428 353 log.go:172] (0xc000138840) (0xc0005c5680) Create stream\nI0909 18:14:32.720444 353 log.go:172] (0xc000138840) (0xc0005c5680) Stream added, broadcasting: 3\nI0909 18:14:32.721484 353 log.go:172] (0xc000138840) Reply frame received for 3\nI0909 18:14:32.721531 353 log.go:172] (0xc000138840) (0xc0005c5720) Create stream\nI0909 18:14:32.721542 353 log.go:172] (0xc000138840) (0xc0005c5720) Stream added, broadcasting: 5\nI0909 18:14:32.722480 353 log.go:172] (0xc000138840) Reply frame received for 5\nI0909 18:14:32.775429 353 log.go:172] (0xc000138840) Data frame received for 5\nI0909 18:14:32.775446 353 log.go:172] (0xc0005c5720) (5) Data frame handling\nI0909 18:14:32.775463 353 log.go:172] (0xc000138840) Data frame received for 3\nI0909 18:14:32.775503 353 log.go:172] (0xc0005c5680) (3) Data frame handling\nI0909 18:14:32.775538 353 log.go:172] (0xc0005c5680) (3) Data frame sent\nI0909 18:14:32.775777 353 log.go:172] (0xc000138840) Data frame received for 3\nI0909 18:14:32.775800 353 log.go:172] (0xc0005c5680) (3) Data frame handling\nI0909 18:14:32.777679 353 log.go:172] (0xc000138840) Data frame received for 1\nI0909 18:14:32.777710 353 log.go:172] (0xc0005c55e0) (1) Data frame handling\nI0909 18:14:32.777812 353 log.go:172] (0xc0005c55e0) (1) Data frame sent\nI0909 18:14:32.777837 353 log.go:172] (0xc000138840) (0xc0005c55e0) Stream removed, broadcasting: 1\nI0909 18:14:32.777869 353 log.go:172] (0xc000138840) Go away received\nI0909 18:14:32.778159 353 log.go:172] (0xc000138840) (0xc0005c55e0) Stream removed, broadcasting: 1\nI0909 18:14:32.778186 353 log.go:172] (0xc000138840) (0xc0005c5680) Stream removed, broadcasting: 3\nI0909 18:14:32.778199 353 log.go:172] (0xc000138840) (0xc0005c5720) Stream removed, broadcasting: 5\n" Sep 9 18:14:32.782: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:14:32.782: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:14:32.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-85lg4 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:14:33.010: INFO: stderr: "I0909 18:14:32.933775 375 log.go:172] (0xc00077a160) (0xc00068a6e0) Create stream\nI0909 18:14:32.933832 375 log.go:172] (0xc00077a160) (0xc00068a6e0) Stream added, broadcasting: 1\nI0909 18:14:32.938346 375 log.go:172] (0xc00077a160) Reply frame received for 1\nI0909 18:14:32.938388 375 log.go:172] (0xc00077a160) (0xc0006e8000) Create stream\nI0909 18:14:32.938405 375 log.go:172] (0xc00077a160) (0xc0006e8000) Stream added, broadcasting: 3\nI0909 18:14:32.939826 375 log.go:172] (0xc00077a160) Reply frame received for 3\nI0909 18:14:32.939870 375 log.go:172] (0xc00077a160) (0xc0006e8140) Create stream\nI0909 18:14:32.939885 375 log.go:172] (0xc00077a160) (0xc0006e8140) Stream added, broadcasting: 5\nI0909 18:14:32.941007 375 log.go:172] (0xc00077a160) Reply frame received for 5\nI0909 18:14:33.006152 375 log.go:172] (0xc00077a160) Data frame received for 5\nI0909 18:14:33.006175 375 log.go:172] 
(0xc0006e8140) (5) Data frame handling\nI0909 18:14:33.006234 375 log.go:172] (0xc00077a160) Data frame received for 3\nI0909 18:14:33.006247 375 log.go:172] (0xc0006e8000) (3) Data frame handling\nI0909 18:14:33.006255 375 log.go:172] (0xc0006e8000) (3) Data frame sent\nI0909 18:14:33.006261 375 log.go:172] (0xc00077a160) Data frame received for 3\nI0909 18:14:33.006267 375 log.go:172] (0xc0006e8000) (3) Data frame handling\nI0909 18:14:33.007625 375 log.go:172] (0xc00077a160) Data frame received for 1\nI0909 18:14:33.007646 375 log.go:172] (0xc00068a6e0) (1) Data frame handling\nI0909 18:14:33.007657 375 log.go:172] (0xc00068a6e0) (1) Data frame sent\nI0909 18:14:33.007670 375 log.go:172] (0xc00077a160) (0xc00068a6e0) Stream removed, broadcasting: 1\nI0909 18:14:33.007683 375 log.go:172] (0xc00077a160) Go away received\nI0909 18:14:33.007857 375 log.go:172] (0xc00077a160) (0xc00068a6e0) Stream removed, broadcasting: 1\nI0909 18:14:33.007877 375 log.go:172] (0xc00077a160) (0xc0006e8000) Stream removed, broadcasting: 3\nI0909 18:14:33.007885 375 log.go:172] (0xc00077a160) (0xc0006e8140) Stream removed, broadcasting: 5\n" Sep 9 18:14:33.010: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:14:33.010: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:14:33.010: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 9 18:14:53.057: INFO: Deleting all statefulset in ns e2e-tests-statefulset-85lg4 Sep 9 18:14:53.087: INFO: Scaling statefulset ss to 0 Sep 9 18:14:53.094: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:14:53.097: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:14:53.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-85lg4" for this suite. 
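A condensed sketch of the ordering guarantees exercised above, using the statefulset name ss and selector baz=blah,foo=bar from this run (namespace omitted): replicas are created one at a time in ordinal order, removed in reverse, and a pod that is not Ready halts further scaling in either direction.

$ kubectl scale statefulset ss --replicas=3
$ kubectl get pods -l baz=blah,foo=bar -w    # ss-0, then ss-1, then ss-2, each waiting for the previous one to be Running and Ready
$ kubectl scale statefulset ss --replicas=0  # teardown removes ss-2 first, then ss-1, then ss-0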
Sep 9 18:14:59.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:14:59.206: INFO: namespace: e2e-tests-statefulset-85lg4, resource: bindings, ignored listing per whitelist Sep 9 18:14:59.233: INFO: namespace e2e-tests-statefulset-85lg4 deletion completed in 6.098988043s • [SLOW TEST:88.947 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:14:59.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-5f404d06-f2c8-11ea-88c2-0242ac110007 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-5f404d06-f2c8-11ea-88c2-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:15:07.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-l9xjm" for this suite. 
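For reference, the update-propagation path above can be reproduced by mounting a ConfigMap as a volume, patching it, and re-reading the file once the kubelet syncs; the names here are hypothetical.

$ kubectl create configmap live-config --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-update-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: live-config
EOF
$ kubectl patch configmap live-config --type merge -p '{"data":{"data-1":"value-2"}}'
$ kubectl exec cm-update-demo -- cat /etc/config/data-1   # shows value-2 after the kubelet's next sync of the volume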
Sep 9 18:15:29.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:15:29.472: INFO: namespace: e2e-tests-configmap-l9xjm, resource: bindings, ignored listing per whitelist Sep 9 18:15:29.535: INFO: namespace e2e-tests-configmap-l9xjm deletion completed in 22.095023654s • [SLOW TEST:30.303 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:15:29.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 18:15:29.694: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:15:33.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-shhkb" for this suite. 
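kubectl exec drives the same pods/exec subresource that this test exercises over a websocket connection; a trivial sketch with a hypothetical pod:

$ kubectl run exec-demo --image=busybox --restart=Never -- sleep 3600
$ kubectl exec exec-demo -- cat /etc/resolv.conf   # once the pod is Running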
Sep 9 18:16:23.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:16:23.879: INFO: namespace: e2e-tests-pods-shhkb, resource: bindings, ignored listing per whitelist Sep 9 18:16:23.885: INFO: namespace e2e-tests-pods-shhkb deletion completed in 50.08862277s • [SLOW TEST:54.350 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:16:23.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Sep 9 18:16:23.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Sep 9 18:16:24.138: INFO: stderr: "" Sep 9 18:16:24.139: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:16:24.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xl456" for this suite. 
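The check above reduces to confirming that the core group is always advertised:

$ kubectl api-versions | grep -x v1   # the legacy core group ("v1") must be present on any conformant cluster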
Sep 9 18:16:30.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:16:30.225: INFO: namespace: e2e-tests-kubectl-xl456, resource: bindings, ignored listing per whitelist Sep 9 18:16:30.239: INFO: namespace e2e-tests-kubectl-xl456 deletion completed in 6.096361551s • [SLOW TEST:6.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:16:30.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-957a0630-f2c8-11ea-88c2-0242ac110007 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-957a0630-f2c8-11ea-88c2-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:17:40.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fzztb" for this suite. 
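Projected volumes behave like the plain ConfigMap volume shown earlier, just wrapped in a projected source list; a minimal sketch with hypothetical names:

$ kubectl create configmap projected-config --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: projected-config
EOF
$ kubectl patch configmap projected-config --type merge -p '{"data":{"data-1":"value-2"}}'   # the projected file follows the update, as in the test above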
Sep 9 18:18:02.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:18:02.833: INFO: namespace: e2e-tests-projected-fzztb, resource: bindings, ignored listing per whitelist Sep 9 18:18:02.855: INFO: namespace e2e-tests-projected-fzztb deletion completed in 22.086921958s • [SLOW TEST:92.616 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:18:02.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 9 18:18:03.044: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-z2cln,SelfLink:/api/v1/namespaces/e2e-tests-watch-z2cln/configmaps/e2e-watch-test-resource-version,UID:ccb06ee6-f2c8-11ea-b060-0242ac120006,ResourceVersion:730484,Generation:0,CreationTimestamp:2020-09-09 18:18:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Sep 9 18:18:03.044: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-z2cln,SelfLink:/api/v1/namespaces/e2e-tests-watch-z2cln/configmaps/e2e-watch-test-resource-version,UID:ccb06ee6-f2c8-11ea-b060-0242ac120006,ResourceVersion:730485,Generation:0,CreationTimestamp:2020-09-09 18:18:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:18:03.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-z2cln" for this suite. 
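The watch semantics above can be reproduced against the raw API: capture a resourceVersion, make further changes, then open a watch from that version and observe only the later events replayed; the names below are hypothetical.

$ kubectl create configmap watch-demo --from-literal=mutation=0
$ RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
$ kubectl patch configmap watch-demo --type merge -p '{"data":{"mutation":"1"}}'
$ kubectl delete configmap watch-demo
$ kubectl proxy --port=8001 &
$ curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&fieldSelector=metadata.name=watch-demo&resourceVersion=${RV}"
  # streams the MODIFIED and DELETED events that happened after ${RV} was captured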
Sep 9 18:18:09.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:18:09.111: INFO: namespace: e2e-tests-watch-z2cln, resource: bindings, ignored listing per whitelist Sep 9 18:18:09.130: INFO: namespace e2e-tests-watch-z2cln deletion completed in 6.082366018s • [SLOW TEST:6.275 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:18:09.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 9 18:18:09.245: INFO: Waiting up to 5m0s for pod "pod-d06cd816-f2c8-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-hdjh7" to be "success or failure" Sep 9 18:18:09.248: INFO: Pod "pod-d06cd816-f2c8-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.045151ms Sep 9 18:18:11.252: INFO: Pod "pod-d06cd816-f2c8-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007182175s Sep 9 18:18:13.257: INFO: Pod "pod-d06cd816-f2c8-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011557866s STEP: Saw pod success Sep 9 18:18:13.257: INFO: Pod "pod-d06cd816-f2c8-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:18:13.260: INFO: Trying to get logs from node hunter-worker pod pod-d06cd816-f2c8-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 18:18:13.298: INFO: Waiting for pod pod-d06cd816-f2c8-11ea-88c2-0242ac110007 to disappear Sep 9 18:18:13.338: INFO: Pod pod-d06cd816-f2c8-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:18:13.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hdjh7" for this suite. 
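An emptyDir with the default medium is just scratch space on the node's disk; a minimal sketch of writing a 0666 file into it, with hypothetical names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}          # default medium (node disk); medium: Memory would back it with tmpfs
EOF
$ kubectl logs emptydir-demo   # shows -rw-rw-rw- for /test-volume/f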
Sep 9 18:18:19.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:18:19.433: INFO: namespace: e2e-tests-emptydir-hdjh7, resource: bindings, ignored listing per whitelist Sep 9 18:18:19.462: INFO: namespace e2e-tests-emptydir-hdjh7 deletion completed in 6.119470126s • [SLOW TEST:10.331 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:18:19.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Sep 9 18:18:19.548: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix402175976/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:18:19.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-khfj7" for this suite. 
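The proxy test above starts `kubectl proxy --unix-socket=<tmp dir>/test` and then retrieves /api/ through that socket. Assuming a proxy started by hand, e.g. `kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock` (the path is an assumption, not the test's generated one), the same request can be made from Go with a transport that dials the socket, using only the standard library:

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    socket := "/tmp/kubectl-proxy.sock" // assumed path; the test uses a generated temp directory

    client := &http.Client{
        Transport: &http.Transport{
            // Ignore the host part of the URL and dial the unix socket instead.
            DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                return net.Dial("unix", socket)
            },
        },
    }

    resp, err := client.Get("http://unix/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // the API group/version listing served by the proxy
}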
Sep 9 18:18:25.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:18:25.677: INFO: namespace: e2e-tests-kubectl-khfj7, resource: bindings, ignored listing per whitelist Sep 9 18:18:25.706: INFO: namespace e2e-tests-kubectl-khfj7 deletion completed in 6.093928733s • [SLOW TEST:6.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:18:25.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-da4f0070-f2c8-11ea-88c2-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-da4f00b9-f2c8-11ea-88c2-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-da4f0070-f2c8-11ea-88c2-0242ac110007 STEP: Updating configmap cm-test-opt-upd-da4f00b9-f2c8-11ea-88c2-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-da4f00d4-f2c8-11ea-88c2-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:19:58.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tj4n5" for this suite. 
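The ConfigMap test above mounts ConfigMap volumes marked optional, then deletes one referenced ConfigMap, updates another, and creates a third, waiting for the changes to appear in the mounted files. A hedged sketch of the key piece, a ConfigMap volume with Optional set, follows; the names, namespace, and the busybox poll loop are illustrative rather than the test's generated fixtures.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "optional-cm-demo"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "cfg",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt"},
                        // Optional lets the pod start and keep running even while
                        // the referenced ConfigMap does not exist.
                        Optional: &optional,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "reader",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "while true; do cat /etc/cfg/* 2>/dev/null; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Creating, updating, or deleting the "cm-test-opt" ConfigMap afterwards is
    // eventually reflected under /etc/cfg in the running container.
}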
Sep 9 18:20:22.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:20:22.382: INFO: namespace: e2e-tests-configmap-tj4n5, resource: bindings, ignored listing per whitelist Sep 9 18:20:22.438: INFO: namespace e2e-tests-configmap-tj4n5 deletion completed in 24.087495228s • [SLOW TEST:116.732 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:20:22.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bcmpc Sep 9 18:20:26.574: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bcmpc STEP: checking the pod's current state and verifying that restartCount is present Sep 9 18:20:26.577: INFO: Initial restart count of pod liveness-http is 0 Sep 9 18:20:44.741: INFO: Restart count of pod e2e-tests-container-probe-bcmpc/liveness-http is now 1 (18.164026459s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:20:44.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bcmpc" for this suite. 
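The probe test above starts a pod whose /healthz HTTP liveness probe eventually fails and waits for restartCount to reach 1. A rough client-go sketch is below; it assumes a recent k8s.io/api where the probe's embedded handler field is named ProbeHandler (older releases call it Handler), and the image and args are placeholders for any server whose /healthz starts failing, not the suite's own liveness image.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // illustrative image/args
                Args:  []string{"liveness"},
                LivenessProbe: &corev1.Probe{
                    ProbeHandler: corev1.ProbeHandler{
                        HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                    },
                    InitialDelaySeconds: 15,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Once /healthz starts failing, the kubelet restarts the container and the
    // pod's restartCount increments, which is what the test waits for.
}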
Sep 9 18:20:50.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:20:50.913: INFO: namespace: e2e-tests-container-probe-bcmpc, resource: bindings, ignored listing per whitelist Sep 9 18:20:50.916: INFO: namespace e2e-tests-container-probe-bcmpc deletion completed in 6.126924915s • [SLOW TEST:28.478 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:20:50.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-plkjh I0909 18:20:51.027255 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-plkjh, replica count: 1 I0909 18:20:52.077757 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0909 18:20:53.078013 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0909 18:20:54.078256 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 9 18:20:54.228: INFO: Created: latency-svc-9sgv6 Sep 9 18:20:54.279: INFO: Got endpoints: latency-svc-9sgv6 [101.352014ms] Sep 9 18:20:54.342: INFO: Created: latency-svc-jkh47 Sep 9 18:20:54.352: INFO: Got endpoints: latency-svc-jkh47 [72.416365ms] Sep 9 18:20:54.377: INFO: Created: latency-svc-dvzv7 Sep 9 18:20:54.394: INFO: Got endpoints: latency-svc-dvzv7 [114.183791ms] Sep 9 18:20:54.444: INFO: Created: latency-svc-rkvww Sep 9 18:20:54.446: INFO: Got endpoints: latency-svc-rkvww [166.376419ms] Sep 9 18:20:54.485: INFO: Created: latency-svc-tlbcc Sep 9 18:20:54.496: INFO: Got endpoints: latency-svc-tlbcc [216.19507ms] Sep 9 18:20:54.528: INFO: Created: latency-svc-9mwq2 Sep 9 18:20:54.575: INFO: Got endpoints: latency-svc-9mwq2 [295.305048ms] Sep 9 18:20:54.587: INFO: Created: latency-svc-49kmc Sep 9 18:20:54.598: INFO: Got endpoints: latency-svc-49kmc [318.514699ms] Sep 9 18:20:54.636: INFO: Created: latency-svc-n2ngj Sep 9 18:20:54.658: INFO: Got endpoints: latency-svc-n2ngj [379.024212ms] Sep 9 18:20:54.731: INFO: Created: latency-svc-lhfzl Sep 9 18:20:54.733: INFO: Got endpoints: latency-svc-lhfzl [453.643881ms] Sep 9 18:20:54.761: INFO: Created: latency-svc-j2t2f Sep 9 18:20:54.773: INFO: Got endpoints: latency-svc-j2t2f [492.976502ms] Sep 9 18:20:54.799: INFO: Created: latency-svc-766n4 Sep 9 18:20:54.862: 
INFO: Got endpoints: latency-svc-766n4 [582.460644ms] Sep 9 18:20:54.884: INFO: Created: latency-svc-cgmxc Sep 9 18:20:54.900: INFO: Got endpoints: latency-svc-cgmxc [620.231626ms] Sep 9 18:20:54.942: INFO: Created: latency-svc-qcqtg Sep 9 18:20:54.953: INFO: Got endpoints: latency-svc-qcqtg [673.4442ms] Sep 9 18:20:55.023: INFO: Created: latency-svc-zcg4m Sep 9 18:20:55.037: INFO: Got endpoints: latency-svc-zcg4m [757.612458ms] Sep 9 18:20:55.067: INFO: Created: latency-svc-nknpk Sep 9 18:20:55.086: INFO: Got endpoints: latency-svc-nknpk [805.796701ms] Sep 9 18:20:55.109: INFO: Created: latency-svc-bkz7r Sep 9 18:20:55.173: INFO: Got endpoints: latency-svc-bkz7r [893.857654ms] Sep 9 18:20:55.200: INFO: Created: latency-svc-k262t Sep 9 18:20:55.218: INFO: Got endpoints: latency-svc-k262t [865.870762ms] Sep 9 18:20:55.271: INFO: Created: latency-svc-dfh5c Sep 9 18:20:55.335: INFO: Got endpoints: latency-svc-dfh5c [940.885375ms] Sep 9 18:20:55.337: INFO: Created: latency-svc-dlx86 Sep 9 18:20:55.350: INFO: Got endpoints: latency-svc-dlx86 [903.950471ms] Sep 9 18:20:55.403: INFO: Created: latency-svc-dv2rt Sep 9 18:20:55.533: INFO: Got endpoints: latency-svc-dv2rt [1.037295747s] Sep 9 18:20:55.536: INFO: Created: latency-svc-4rhrw Sep 9 18:20:55.554: INFO: Got endpoints: latency-svc-4rhrw [979.14172ms] Sep 9 18:20:55.577: INFO: Created: latency-svc-wswx6 Sep 9 18:20:55.613: INFO: Got endpoints: latency-svc-wswx6 [1.014318511s] Sep 9 18:20:55.670: INFO: Created: latency-svc-cdhpc Sep 9 18:20:55.673: INFO: Got endpoints: latency-svc-cdhpc [1.014915704s] Sep 9 18:20:55.704: INFO: Created: latency-svc-ptvbm Sep 9 18:20:55.717: INFO: Got endpoints: latency-svc-ptvbm [983.4811ms] Sep 9 18:20:55.740: INFO: Created: latency-svc-lz85q Sep 9 18:20:55.753: INFO: Got endpoints: latency-svc-lz85q [980.492691ms] Sep 9 18:20:55.815: INFO: Created: latency-svc-trcw9 Sep 9 18:20:55.819: INFO: Got endpoints: latency-svc-trcw9 [956.485884ms] Sep 9 18:20:55.853: INFO: Created: latency-svc-8jd97 Sep 9 18:20:55.874: INFO: Got endpoints: latency-svc-8jd97 [973.579249ms] Sep 9 18:20:55.900: INFO: Created: latency-svc-h9frh Sep 9 18:20:55.970: INFO: Got endpoints: latency-svc-h9frh [1.016410414s] Sep 9 18:20:55.979: INFO: Created: latency-svc-z76zk Sep 9 18:20:55.994: INFO: Got endpoints: latency-svc-z76zk [956.754673ms] Sep 9 18:20:56.015: INFO: Created: latency-svc-wgvjz Sep 9 18:20:56.030: INFO: Got endpoints: latency-svc-wgvjz [944.811661ms] Sep 9 18:20:56.051: INFO: Created: latency-svc-rwtp5 Sep 9 18:20:56.067: INFO: Got endpoints: latency-svc-rwtp5 [893.137231ms] Sep 9 18:20:56.114: INFO: Created: latency-svc-ftl4s Sep 9 18:20:56.133: INFO: Got endpoints: latency-svc-ftl4s [915.119045ms] Sep 9 18:20:56.177: INFO: Created: latency-svc-kkzj8 Sep 9 18:20:56.211: INFO: Got endpoints: latency-svc-kkzj8 [876.585276ms] Sep 9 18:20:56.267: INFO: Created: latency-svc-6d85d Sep 9 18:20:56.314: INFO: Got endpoints: latency-svc-6d85d [964.349859ms] Sep 9 18:20:56.351: INFO: Created: latency-svc-b59rn Sep 9 18:20:56.419: INFO: Got endpoints: latency-svc-b59rn [885.703891ms] Sep 9 18:20:56.422: INFO: Created: latency-svc-9tcvz Sep 9 18:20:56.433: INFO: Got endpoints: latency-svc-9tcvz [879.128073ms] Sep 9 18:20:56.459: INFO: Created: latency-svc-rbvcj Sep 9 18:20:56.476: INFO: Got endpoints: latency-svc-rbvcj [863.165308ms] Sep 9 18:20:56.495: INFO: Created: latency-svc-qmccx Sep 9 18:20:56.518: INFO: Got endpoints: latency-svc-qmccx [844.735109ms] Sep 9 18:20:56.598: INFO: Created: latency-svc-xvsrc Sep 9 18:20:56.608: 
INFO: Got endpoints: latency-svc-xvsrc [891.228863ms] Sep 9 18:20:56.638: INFO: Created: latency-svc-wsfw7 Sep 9 18:20:56.657: INFO: Got endpoints: latency-svc-wsfw7 [903.485579ms] Sep 9 18:20:56.749: INFO: Created: latency-svc-jtwt4 Sep 9 18:20:56.770: INFO: Got endpoints: latency-svc-jtwt4 [951.137679ms] Sep 9 18:20:56.807: INFO: Created: latency-svc-9dv5z Sep 9 18:20:56.825: INFO: Got endpoints: latency-svc-9dv5z [951.371531ms] Sep 9 18:20:56.848: INFO: Created: latency-svc-q8d6c Sep 9 18:20:56.886: INFO: Got endpoints: latency-svc-q8d6c [916.324592ms] Sep 9 18:20:56.914: INFO: Created: latency-svc-rdkvj Sep 9 18:20:56.927: INFO: Got endpoints: latency-svc-rdkvj [932.975894ms] Sep 9 18:20:56.956: INFO: Created: latency-svc-nvljc Sep 9 18:20:56.969: INFO: Got endpoints: latency-svc-nvljc [938.754375ms] Sep 9 18:20:57.024: INFO: Created: latency-svc-6lblm Sep 9 18:20:57.027: INFO: Got endpoints: latency-svc-6lblm [960.485873ms] Sep 9 18:20:57.058: INFO: Created: latency-svc-gvt4v Sep 9 18:20:57.100: INFO: Got endpoints: latency-svc-gvt4v [967.02087ms] Sep 9 18:20:57.269: INFO: Created: latency-svc-b7ftb Sep 9 18:20:57.274: INFO: Got endpoints: latency-svc-b7ftb [1.062588233s] Sep 9 18:20:57.576: INFO: Created: latency-svc-b8tv9 Sep 9 18:20:57.578: INFO: Got endpoints: latency-svc-b8tv9 [1.26390465s] Sep 9 18:20:57.628: INFO: Created: latency-svc-m7ths Sep 9 18:20:57.642: INFO: Got endpoints: latency-svc-m7ths [1.222739822s] Sep 9 18:20:57.663: INFO: Created: latency-svc-jrrq2 Sep 9 18:20:57.717: INFO: Got endpoints: latency-svc-jrrq2 [1.283958496s] Sep 9 18:20:57.738: INFO: Created: latency-svc-gtbjj Sep 9 18:20:57.762: INFO: Got endpoints: latency-svc-gtbjj [1.286386473s] Sep 9 18:20:57.783: INFO: Created: latency-svc-57smv Sep 9 18:20:57.798: INFO: Got endpoints: latency-svc-57smv [1.280041421s] Sep 9 18:20:57.898: INFO: Created: latency-svc-wvlmp Sep 9 18:20:57.902: INFO: Got endpoints: latency-svc-wvlmp [1.294199561s] Sep 9 18:20:57.933: INFO: Created: latency-svc-drdfv Sep 9 18:20:57.957: INFO: Got endpoints: latency-svc-drdfv [1.299733529s] Sep 9 18:20:57.987: INFO: Created: latency-svc-rkz4q Sep 9 18:20:57.997: INFO: Got endpoints: latency-svc-rkz4q [1.226919518s] Sep 9 18:20:58.053: INFO: Created: latency-svc-4jkv4 Sep 9 18:20:58.061: INFO: Got endpoints: latency-svc-4jkv4 [1.23599023s] Sep 9 18:20:58.088: INFO: Created: latency-svc-kzvsj Sep 9 18:20:58.099: INFO: Got endpoints: latency-svc-kzvsj [1.21308999s] Sep 9 18:20:58.119: INFO: Created: latency-svc-srxcj Sep 9 18:20:58.149: INFO: Got endpoints: latency-svc-srxcj [1.221925643s] Sep 9 18:20:58.216: INFO: Created: latency-svc-9x99h Sep 9 18:20:58.226: INFO: Got endpoints: latency-svc-9x99h [1.25708021s] Sep 9 18:20:58.251: INFO: Created: latency-svc-b9v8h Sep 9 18:20:58.262: INFO: Got endpoints: latency-svc-b9v8h [1.234983604s] Sep 9 18:20:58.287: INFO: Created: latency-svc-zrq9c Sep 9 18:20:58.299: INFO: Got endpoints: latency-svc-zrq9c [1.198566225s] Sep 9 18:20:58.359: INFO: Created: latency-svc-xh9fx Sep 9 18:20:58.377: INFO: Got endpoints: latency-svc-xh9fx [1.103173283s] Sep 9 18:20:58.402: INFO: Created: latency-svc-7qggg Sep 9 18:20:58.407: INFO: Got endpoints: latency-svc-7qggg [828.265434ms] Sep 9 18:20:58.431: INFO: Created: latency-svc-l28x8 Sep 9 18:20:58.437: INFO: Got endpoints: latency-svc-l28x8 [795.079069ms] Sep 9 18:20:58.510: INFO: Created: latency-svc-8cpf8 Sep 9 18:20:58.513: INFO: Got endpoints: latency-svc-8cpf8 [795.181275ms] Sep 9 18:20:58.587: INFO: Created: latency-svc-b7wn4 Sep 9 18:20:58.600: 
INFO: Got endpoints: latency-svc-b7wn4 [837.33774ms] Sep 9 18:20:58.652: INFO: Created: latency-svc-89rlw Sep 9 18:20:58.660: INFO: Got endpoints: latency-svc-89rlw [861.408248ms] Sep 9 18:20:58.689: INFO: Created: latency-svc-j6vqx Sep 9 18:20:58.719: INFO: Got endpoints: latency-svc-j6vqx [816.184469ms] Sep 9 18:20:58.750: INFO: Created: latency-svc-s9qt5 Sep 9 18:20:58.809: INFO: Got endpoints: latency-svc-s9qt5 [851.832363ms] Sep 9 18:20:58.811: INFO: Created: latency-svc-9kjm4 Sep 9 18:20:58.829: INFO: Got endpoints: latency-svc-9kjm4 [831.672586ms] Sep 9 18:20:58.852: INFO: Created: latency-svc-m9hh2 Sep 9 18:20:58.868: INFO: Got endpoints: latency-svc-m9hh2 [806.952192ms] Sep 9 18:20:58.899: INFO: Created: latency-svc-f7p9q Sep 9 18:20:58.907: INFO: Got endpoints: latency-svc-f7p9q [807.486472ms] Sep 9 18:20:58.965: INFO: Created: latency-svc-x6f9p Sep 9 18:20:58.997: INFO: Got endpoints: latency-svc-x6f9p [848.130506ms] Sep 9 18:20:59.054: INFO: Created: latency-svc-p7nhj Sep 9 18:20:59.126: INFO: Got endpoints: latency-svc-p7nhj [899.53291ms] Sep 9 18:20:59.128: INFO: Created: latency-svc-6qjtq Sep 9 18:20:59.144: INFO: Got endpoints: latency-svc-6qjtq [881.612706ms] Sep 9 18:20:59.175: INFO: Created: latency-svc-489xm Sep 9 18:20:59.196: INFO: Got endpoints: latency-svc-489xm [897.292704ms] Sep 9 18:20:59.286: INFO: Created: latency-svc-df5zr Sep 9 18:20:59.286: INFO: Got endpoints: latency-svc-df5zr [909.136638ms] Sep 9 18:20:59.330: INFO: Created: latency-svc-222br Sep 9 18:20:59.347: INFO: Got endpoints: latency-svc-222br [939.848581ms] Sep 9 18:20:59.366: INFO: Created: latency-svc-cjsrk Sep 9 18:20:59.413: INFO: Got endpoints: latency-svc-cjsrk [976.03303ms] Sep 9 18:20:59.420: INFO: Created: latency-svc-pnwlw Sep 9 18:20:59.437: INFO: Got endpoints: latency-svc-pnwlw [924.294103ms] Sep 9 18:20:59.463: INFO: Created: latency-svc-nxmcl Sep 9 18:20:59.479: INFO: Got endpoints: latency-svc-nxmcl [879.465955ms] Sep 9 18:20:59.505: INFO: Created: latency-svc-q2dzk Sep 9 18:20:59.567: INFO: Got endpoints: latency-svc-q2dzk [907.638398ms] Sep 9 18:20:59.569: INFO: Created: latency-svc-zx97g Sep 9 18:20:59.575: INFO: Got endpoints: latency-svc-zx97g [856.364211ms] Sep 9 18:20:59.601: INFO: Created: latency-svc-f7z65 Sep 9 18:20:59.618: INFO: Got endpoints: latency-svc-f7z65 [809.061237ms] Sep 9 18:20:59.649: INFO: Created: latency-svc-jvcpg Sep 9 18:20:59.707: INFO: Got endpoints: latency-svc-jvcpg [878.081674ms] Sep 9 18:20:59.708: INFO: Created: latency-svc-hdb8j Sep 9 18:20:59.732: INFO: Got endpoints: latency-svc-hdb8j [864.238115ms] Sep 9 18:20:59.787: INFO: Created: latency-svc-b2kk5 Sep 9 18:20:59.805: INFO: Got endpoints: latency-svc-b2kk5 [897.868129ms] Sep 9 18:20:59.856: INFO: Created: latency-svc-hm227 Sep 9 18:20:59.864: INFO: Got endpoints: latency-svc-hm227 [866.918539ms] Sep 9 18:20:59.888: INFO: Created: latency-svc-45xnn Sep 9 18:20:59.907: INFO: Got endpoints: latency-svc-45xnn [780.891204ms] Sep 9 18:20:59.937: INFO: Created: latency-svc-dtr6b Sep 9 18:20:59.954: INFO: Got endpoints: latency-svc-dtr6b [810.04081ms] Sep 9 18:21:00.006: INFO: Created: latency-svc-hs9lk Sep 9 18:21:00.009: INFO: Got endpoints: latency-svc-hs9lk [812.437279ms] Sep 9 18:21:00.039: INFO: Created: latency-svc-pdpmh Sep 9 18:21:00.056: INFO: Got endpoints: latency-svc-pdpmh [770.005231ms] Sep 9 18:21:00.080: INFO: Created: latency-svc-svk5h Sep 9 18:21:00.104: INFO: Got endpoints: latency-svc-svk5h [757.199806ms] Sep 9 18:21:00.173: INFO: Created: latency-svc-2shrm Sep 9 18:21:00.183: 
INFO: Got endpoints: latency-svc-2shrm [769.419963ms] Sep 9 18:21:00.207: INFO: Created: latency-svc-4b6zk Sep 9 18:21:00.219: INFO: Got endpoints: latency-svc-4b6zk [781.720005ms] Sep 9 18:21:00.248: INFO: Created: latency-svc-vg7wj Sep 9 18:21:00.261: INFO: Got endpoints: latency-svc-vg7wj [782.010901ms] Sep 9 18:21:00.330: INFO: Created: latency-svc-v8j85 Sep 9 18:21:00.344: INFO: Got endpoints: latency-svc-v8j85 [776.199969ms] Sep 9 18:21:00.374: INFO: Created: latency-svc-jh9bk Sep 9 18:21:00.388: INFO: Got endpoints: latency-svc-jh9bk [812.508775ms] Sep 9 18:21:00.411: INFO: Created: latency-svc-l45jd Sep 9 18:21:00.424: INFO: Got endpoints: latency-svc-l45jd [805.991256ms] Sep 9 18:21:00.478: INFO: Created: latency-svc-gnk78 Sep 9 18:21:00.485: INFO: Got endpoints: latency-svc-gnk78 [777.804253ms] Sep 9 18:21:00.518: INFO: Created: latency-svc-wp52f Sep 9 18:21:00.539: INFO: Got endpoints: latency-svc-wp52f [806.175349ms] Sep 9 18:21:00.566: INFO: Created: latency-svc-2ppks Sep 9 18:21:00.641: INFO: Got endpoints: latency-svc-2ppks [836.521165ms] Sep 9 18:21:00.644: INFO: Created: latency-svc-tm6xg Sep 9 18:21:00.647: INFO: Got endpoints: latency-svc-tm6xg [782.838808ms] Sep 9 18:21:00.674: INFO: Created: latency-svc-b7fzs Sep 9 18:21:00.683: INFO: Got endpoints: latency-svc-b7fzs [775.783349ms] Sep 9 18:21:00.704: INFO: Created: latency-svc-286kw Sep 9 18:21:00.713: INFO: Got endpoints: latency-svc-286kw [758.93689ms] Sep 9 18:21:00.734: INFO: Created: latency-svc-5qm78 Sep 9 18:21:00.795: INFO: Got endpoints: latency-svc-5qm78 [786.422408ms] Sep 9 18:21:00.800: INFO: Created: latency-svc-fhv9l Sep 9 18:21:00.842: INFO: Got endpoints: latency-svc-fhv9l [785.81832ms] Sep 9 18:21:00.884: INFO: Created: latency-svc-hf8vp Sep 9 18:21:00.894: INFO: Got endpoints: latency-svc-hf8vp [790.269824ms] Sep 9 18:21:00.965: INFO: Created: latency-svc-5rnth Sep 9 18:21:00.968: INFO: Got endpoints: latency-svc-5rnth [785.030788ms] Sep 9 18:21:00.997: INFO: Created: latency-svc-p2n5k Sep 9 18:21:01.039: INFO: Got endpoints: latency-svc-p2n5k [820.555398ms] Sep 9 18:21:01.107: INFO: Created: latency-svc-4dkv5 Sep 9 18:21:01.110: INFO: Got endpoints: latency-svc-4dkv5 [848.750686ms] Sep 9 18:21:01.142: INFO: Created: latency-svc-6g45w Sep 9 18:21:01.159: INFO: Got endpoints: latency-svc-6g45w [815.276201ms] Sep 9 18:21:01.184: INFO: Created: latency-svc-gg7xq Sep 9 18:21:01.195: INFO: Got endpoints: latency-svc-gg7xq [806.980171ms] Sep 9 18:21:01.282: INFO: Created: latency-svc-5tmqd Sep 9 18:21:01.285: INFO: Got endpoints: latency-svc-5tmqd [861.166426ms] Sep 9 18:21:01.334: INFO: Created: latency-svc-42gfp Sep 9 18:21:01.351: INFO: Got endpoints: latency-svc-42gfp [866.555612ms] Sep 9 18:21:01.381: INFO: Created: latency-svc-tms6w Sep 9 18:21:01.454: INFO: Got endpoints: latency-svc-tms6w [915.430674ms] Sep 9 18:21:01.457: INFO: Created: latency-svc-rt98q Sep 9 18:21:01.465: INFO: Got endpoints: latency-svc-rt98q [824.130863ms] Sep 9 18:21:01.490: INFO: Created: latency-svc-qlzdh Sep 9 18:21:01.502: INFO: Got endpoints: latency-svc-qlzdh [854.796117ms] Sep 9 18:21:01.538: INFO: Created: latency-svc-4cjm9 Sep 9 18:21:01.550: INFO: Got endpoints: latency-svc-4cjm9 [867.035339ms] Sep 9 18:21:01.606: INFO: Created: latency-svc-wt6h9 Sep 9 18:21:01.610: INFO: Got endpoints: latency-svc-wt6h9 [897.350194ms] Sep 9 18:21:01.652: INFO: Created: latency-svc-xpmjg Sep 9 18:21:01.677: INFO: Got endpoints: latency-svc-xpmjg [881.555291ms] Sep 9 18:21:01.778: INFO: Created: latency-svc-hbzh5 Sep 9 18:21:01.782: 
INFO: Got endpoints: latency-svc-hbzh5 [939.232979ms] Sep 9 18:21:01.813: INFO: Created: latency-svc-gqrlv Sep 9 18:21:01.827: INFO: Got endpoints: latency-svc-gqrlv [933.007629ms] Sep 9 18:21:01.850: INFO: Created: latency-svc-tgxr4 Sep 9 18:21:01.875: INFO: Got endpoints: latency-svc-tgxr4 [907.704843ms] Sep 9 18:21:01.930: INFO: Created: latency-svc-npxsm Sep 9 18:21:01.957: INFO: Got endpoints: latency-svc-npxsm [917.420701ms] Sep 9 18:21:01.987: INFO: Created: latency-svc-5qjvf Sep 9 18:21:01.997: INFO: Got endpoints: latency-svc-5qjvf [886.870183ms] Sep 9 18:21:02.071: INFO: Created: latency-svc-ktst9 Sep 9 18:21:02.073: INFO: Got endpoints: latency-svc-ktst9 [116.254454ms] Sep 9 18:21:02.101: INFO: Created: latency-svc-slbnk Sep 9 18:21:02.116: INFO: Got endpoints: latency-svc-slbnk [956.899121ms] Sep 9 18:21:02.143: INFO: Created: latency-svc-cftt8 Sep 9 18:21:02.234: INFO: Got endpoints: latency-svc-cftt8 [1.038733298s] Sep 9 18:21:02.235: INFO: Created: latency-svc-xdt2p Sep 9 18:21:02.248: INFO: Got endpoints: latency-svc-xdt2p [963.490479ms] Sep 9 18:21:02.281: INFO: Created: latency-svc-r62qz Sep 9 18:21:02.290: INFO: Got endpoints: latency-svc-r62qz [939.231664ms] Sep 9 18:21:02.311: INFO: Created: latency-svc-ntmvg Sep 9 18:21:02.321: INFO: Got endpoints: latency-svc-ntmvg [866.788595ms] Sep 9 18:21:02.383: INFO: Created: latency-svc-9f7fv Sep 9 18:21:02.386: INFO: Got endpoints: latency-svc-9f7fv [920.264852ms] Sep 9 18:21:02.418: INFO: Created: latency-svc-tdc5q Sep 9 18:21:02.435: INFO: Got endpoints: latency-svc-tdc5q [933.060558ms] Sep 9 18:21:02.461: INFO: Created: latency-svc-wr2kj Sep 9 18:21:02.478: INFO: Got endpoints: latency-svc-wr2kj [927.684044ms] Sep 9 18:21:02.557: INFO: Created: latency-svc-c9x8c Sep 9 18:21:02.560: INFO: Got endpoints: latency-svc-c9x8c [950.061277ms] Sep 9 18:21:02.617: INFO: Created: latency-svc-vqb99 Sep 9 18:21:02.628: INFO: Got endpoints: latency-svc-vqb99 [950.848599ms] Sep 9 18:21:02.724: INFO: Created: latency-svc-bwchn Sep 9 18:21:02.730: INFO: Got endpoints: latency-svc-bwchn [948.317843ms] Sep 9 18:21:02.755: INFO: Created: latency-svc-hjlps Sep 9 18:21:02.784: INFO: Got endpoints: latency-svc-hjlps [957.232746ms] Sep 9 18:21:02.815: INFO: Created: latency-svc-k598b Sep 9 18:21:02.868: INFO: Got endpoints: latency-svc-k598b [992.741487ms] Sep 9 18:21:02.871: INFO: Created: latency-svc-bfzgx Sep 9 18:21:02.881: INFO: Got endpoints: latency-svc-bfzgx [883.569857ms] Sep 9 18:21:02.905: INFO: Created: latency-svc-2hnrk Sep 9 18:21:02.924: INFO: Got endpoints: latency-svc-2hnrk [850.2291ms] Sep 9 18:21:02.946: INFO: Created: latency-svc-kpqmq Sep 9 18:21:02.965: INFO: Got endpoints: latency-svc-kpqmq [848.756948ms] Sep 9 18:21:03.041: INFO: Created: latency-svc-2kddv Sep 9 18:21:03.073: INFO: Got endpoints: latency-svc-2kddv [839.050025ms] Sep 9 18:21:03.192: INFO: Created: latency-svc-pqbsb Sep 9 18:21:03.194: INFO: Got endpoints: latency-svc-pqbsb [945.828357ms] Sep 9 18:21:03.223: INFO: Created: latency-svc-xd95t Sep 9 18:21:03.237: INFO: Got endpoints: latency-svc-xd95t [946.896807ms] Sep 9 18:21:03.283: INFO: Created: latency-svc-pbf5x Sep 9 18:21:03.352: INFO: Got endpoints: latency-svc-pbf5x [1.031271379s] Sep 9 18:21:03.366: INFO: Created: latency-svc-s2d6m Sep 9 18:21:03.388: INFO: Got endpoints: latency-svc-s2d6m [1.001939871s] Sep 9 18:21:03.409: INFO: Created: latency-svc-9987s Sep 9 18:21:03.424: INFO: Got endpoints: latency-svc-9987s [988.415671ms] Sep 9 18:21:03.450: INFO: Created: latency-svc-d47fz Sep 9 18:21:03.509: 
INFO: Got endpoints: latency-svc-d47fz [1.031683029s] Sep 9 18:21:03.528: INFO: Created: latency-svc-z2c9f Sep 9 18:21:03.545: INFO: Got endpoints: latency-svc-z2c9f [984.807492ms] Sep 9 18:21:03.576: INFO: Created: latency-svc-fvmzt Sep 9 18:21:03.587: INFO: Got endpoints: latency-svc-fvmzt [958.934502ms] Sep 9 18:21:03.652: INFO: Created: latency-svc-rcxvg Sep 9 18:21:03.654: INFO: Got endpoints: latency-svc-rcxvg [924.490483ms] Sep 9 18:21:03.685: INFO: Created: latency-svc-r48ts Sep 9 18:21:03.701: INFO: Got endpoints: latency-svc-r48ts [916.544591ms] Sep 9 18:21:03.726: INFO: Created: latency-svc-lzm2b Sep 9 18:21:03.833: INFO: Got endpoints: latency-svc-lzm2b [964.344464ms] Sep 9 18:21:03.834: INFO: Created: latency-svc-vxw4v Sep 9 18:21:03.845: INFO: Got endpoints: latency-svc-vxw4v [964.502834ms] Sep 9 18:21:03.882: INFO: Created: latency-svc-82xzw Sep 9 18:21:03.894: INFO: Got endpoints: latency-svc-82xzw [969.818095ms] Sep 9 18:21:03.918: INFO: Created: latency-svc-k8jrk Sep 9 18:21:03.993: INFO: Got endpoints: latency-svc-k8jrk [1.027811358s] Sep 9 18:21:03.995: INFO: Created: latency-svc-txdvc Sep 9 18:21:04.002: INFO: Got endpoints: latency-svc-txdvc [929.118506ms] Sep 9 18:21:04.032: INFO: Created: latency-svc-p9h5p Sep 9 18:21:04.050: INFO: Got endpoints: latency-svc-p9h5p [855.969652ms] Sep 9 18:21:04.074: INFO: Created: latency-svc-zf56b Sep 9 18:21:04.087: INFO: Got endpoints: latency-svc-zf56b [849.079287ms] Sep 9 18:21:04.144: INFO: Created: latency-svc-h8rp6 Sep 9 18:21:04.157: INFO: Got endpoints: latency-svc-h8rp6 [804.393495ms] Sep 9 18:21:04.182: INFO: Created: latency-svc-dczqb Sep 9 18:21:04.196: INFO: Got endpoints: latency-svc-dczqb [807.883845ms] Sep 9 18:21:04.237: INFO: Created: latency-svc-hpbht Sep 9 18:21:04.281: INFO: Got endpoints: latency-svc-hpbht [857.128457ms] Sep 9 18:21:04.362: INFO: Created: latency-svc-lqz2s Sep 9 18:21:04.431: INFO: Created: latency-svc-gr6sf Sep 9 18:21:04.463: INFO: Got endpoints: latency-svc-lqz2s [954.090773ms] Sep 9 18:21:04.466: INFO: Created: latency-svc-pfs4m Sep 9 18:21:04.488: INFO: Got endpoints: latency-svc-pfs4m [901.666388ms] Sep 9 18:21:04.524: INFO: Got endpoints: latency-svc-gr6sf [978.767519ms] Sep 9 18:21:04.525: INFO: Created: latency-svc-g2dnz Sep 9 18:21:04.598: INFO: Got endpoints: latency-svc-g2dnz [943.349061ms] Sep 9 18:21:04.667: INFO: Created: latency-svc-vcvp6 Sep 9 18:21:04.731: INFO: Got endpoints: latency-svc-vcvp6 [1.029803144s] Sep 9 18:21:04.764: INFO: Created: latency-svc-ndlfh Sep 9 18:21:04.783: INFO: Got endpoints: latency-svc-ndlfh [949.986104ms] Sep 9 18:21:04.807: INFO: Created: latency-svc-v59hr Sep 9 18:21:04.879: INFO: Got endpoints: latency-svc-v59hr [1.033803405s] Sep 9 18:21:04.883: INFO: Created: latency-svc-89zln Sep 9 18:21:04.915: INFO: Got endpoints: latency-svc-89zln [1.021466213s] Sep 9 18:21:04.949: INFO: Created: latency-svc-rgjmt Sep 9 18:21:05.036: INFO: Got endpoints: latency-svc-rgjmt [1.043013522s] Sep 9 18:21:05.042: INFO: Created: latency-svc-gm8hf Sep 9 18:21:05.070: INFO: Got endpoints: latency-svc-gm8hf [1.067873829s] Sep 9 18:21:05.106: INFO: Created: latency-svc-s7mln Sep 9 18:21:05.120: INFO: Got endpoints: latency-svc-s7mln [1.069084026s] Sep 9 18:21:05.197: INFO: Created: latency-svc-mwfmg Sep 9 18:21:05.219: INFO: Got endpoints: latency-svc-mwfmg [1.131966289s] Sep 9 18:21:05.219: INFO: Created: latency-svc-xrf4k Sep 9 18:21:05.240: INFO: Got endpoints: latency-svc-xrf4k [1.083117716s] Sep 9 18:21:05.267: INFO: Created: latency-svc-5tpsd Sep 9 
18:21:05.276: INFO: Got endpoints: latency-svc-5tpsd [1.080388752s] Sep 9 18:21:05.349: INFO: Created: latency-svc-fmp2q Sep 9 18:21:05.352: INFO: Got endpoints: latency-svc-fmp2q [1.070751659s] Sep 9 18:21:05.381: INFO: Created: latency-svc-n5bjq Sep 9 18:21:05.397: INFO: Got endpoints: latency-svc-n5bjq [933.326068ms] Sep 9 18:21:05.424: INFO: Created: latency-svc-p8t9c Sep 9 18:21:05.445: INFO: Got endpoints: latency-svc-p8t9c [956.613615ms] Sep 9 18:21:05.496: INFO: Created: latency-svc-w4krd Sep 9 18:21:05.499: INFO: Got endpoints: latency-svc-w4krd [974.681224ms] Sep 9 18:21:05.532: INFO: Created: latency-svc-729pn Sep 9 18:21:05.541: INFO: Got endpoints: latency-svc-729pn [943.219111ms] Sep 9 18:21:05.573: INFO: Created: latency-svc-588wz Sep 9 18:21:05.590: INFO: Got endpoints: latency-svc-588wz [858.741853ms] Sep 9 18:21:05.641: INFO: Created: latency-svc-kdgbd Sep 9 18:21:05.663: INFO: Got endpoints: latency-svc-kdgbd [880.245281ms] Sep 9 18:21:05.694: INFO: Created: latency-svc-hswqg Sep 9 18:21:05.710: INFO: Got endpoints: latency-svc-hswqg [831.289358ms] Sep 9 18:21:05.736: INFO: Created: latency-svc-5f8zg Sep 9 18:21:05.784: INFO: Got endpoints: latency-svc-5f8zg [868.700543ms] Sep 9 18:21:05.807: INFO: Created: latency-svc-862g2 Sep 9 18:21:05.837: INFO: Got endpoints: latency-svc-862g2 [800.60643ms] Sep 9 18:21:05.855: INFO: Created: latency-svc-tjt98 Sep 9 18:21:05.879: INFO: Got endpoints: latency-svc-tjt98 [808.852293ms] Sep 9 18:21:05.971: INFO: Created: latency-svc-dh95n Sep 9 18:21:05.981: INFO: Got endpoints: latency-svc-dh95n [861.626213ms] Sep 9 18:21:06.005: INFO: Created: latency-svc-kshjk Sep 9 18:21:06.023: INFO: Got endpoints: latency-svc-kshjk [804.528734ms] Sep 9 18:21:06.050: INFO: Created: latency-svc-6954z Sep 9 18:21:06.059: INFO: Got endpoints: latency-svc-6954z [819.190284ms] Sep 9 18:21:06.107: INFO: Created: latency-svc-nwdmv Sep 9 18:21:06.110: INFO: Got endpoints: latency-svc-nwdmv [833.323534ms] Sep 9 18:21:06.139: INFO: Created: latency-svc-kzdv5 Sep 9 18:21:06.158: INFO: Got endpoints: latency-svc-kzdv5 [806.364606ms] Sep 9 18:21:06.191: INFO: Created: latency-svc-p55xc Sep 9 18:21:06.288: INFO: Got endpoints: latency-svc-p55xc [890.595415ms] Sep 9 18:21:06.302: INFO: Created: latency-svc-6x8lt Sep 9 18:21:06.317: INFO: Got endpoints: latency-svc-6x8lt [871.498402ms] Sep 9 18:21:06.370: INFO: Created: latency-svc-4rqbv Sep 9 18:21:06.378: INFO: Got endpoints: latency-svc-4rqbv [879.449948ms] Sep 9 18:21:06.424: INFO: Created: latency-svc-vnn62 Sep 9 18:21:06.427: INFO: Got endpoints: latency-svc-vnn62 [885.713948ms] Sep 9 18:21:06.455: INFO: Created: latency-svc-4gv9v Sep 9 18:21:06.469: INFO: Got endpoints: latency-svc-4gv9v [879.448281ms] Sep 9 18:21:06.497: INFO: Created: latency-svc-smg2f Sep 9 18:21:06.575: INFO: Got endpoints: latency-svc-smg2f [911.783244ms] Sep 9 18:21:06.575: INFO: Latencies: [72.416365ms 114.183791ms 116.254454ms 166.376419ms 216.19507ms 295.305048ms 318.514699ms 379.024212ms 453.643881ms 492.976502ms 582.460644ms 620.231626ms 673.4442ms 757.199806ms 757.612458ms 758.93689ms 769.419963ms 770.005231ms 775.783349ms 776.199969ms 777.804253ms 780.891204ms 781.720005ms 782.010901ms 782.838808ms 785.030788ms 785.81832ms 786.422408ms 790.269824ms 795.079069ms 795.181275ms 800.60643ms 804.393495ms 804.528734ms 805.796701ms 805.991256ms 806.175349ms 806.364606ms 806.952192ms 806.980171ms 807.486472ms 807.883845ms 808.852293ms 809.061237ms 810.04081ms 812.437279ms 812.508775ms 815.276201ms 816.184469ms 819.190284ms 820.555398ms 
824.130863ms 828.265434ms 831.289358ms 831.672586ms 833.323534ms 836.521165ms 837.33774ms 839.050025ms 844.735109ms 848.130506ms 848.750686ms 848.756948ms 849.079287ms 850.2291ms 851.832363ms 854.796117ms 855.969652ms 856.364211ms 857.128457ms 858.741853ms 861.166426ms 861.408248ms 861.626213ms 863.165308ms 864.238115ms 865.870762ms 866.555612ms 866.788595ms 866.918539ms 867.035339ms 868.700543ms 871.498402ms 876.585276ms 878.081674ms 879.128073ms 879.448281ms 879.449948ms 879.465955ms 880.245281ms 881.555291ms 881.612706ms 883.569857ms 885.703891ms 885.713948ms 886.870183ms 890.595415ms 891.228863ms 893.137231ms 893.857654ms 897.292704ms 897.350194ms 897.868129ms 899.53291ms 901.666388ms 903.485579ms 903.950471ms 907.638398ms 907.704843ms 909.136638ms 911.783244ms 915.119045ms 915.430674ms 916.324592ms 916.544591ms 917.420701ms 920.264852ms 924.294103ms 924.490483ms 927.684044ms 929.118506ms 932.975894ms 933.007629ms 933.060558ms 933.326068ms 938.754375ms 939.231664ms 939.232979ms 939.848581ms 940.885375ms 943.219111ms 943.349061ms 944.811661ms 945.828357ms 946.896807ms 948.317843ms 949.986104ms 950.061277ms 950.848599ms 951.137679ms 951.371531ms 954.090773ms 956.485884ms 956.613615ms 956.754673ms 956.899121ms 957.232746ms 958.934502ms 960.485873ms 963.490479ms 964.344464ms 964.349859ms 964.502834ms 967.02087ms 969.818095ms 973.579249ms 974.681224ms 976.03303ms 978.767519ms 979.14172ms 980.492691ms 983.4811ms 984.807492ms 988.415671ms 992.741487ms 1.001939871s 1.014318511s 1.014915704s 1.016410414s 1.021466213s 1.027811358s 1.029803144s 1.031271379s 1.031683029s 1.033803405s 1.037295747s 1.038733298s 1.043013522s 1.062588233s 1.067873829s 1.069084026s 1.070751659s 1.080388752s 1.083117716s 1.103173283s 1.131966289s 1.198566225s 1.21308999s 1.221925643s 1.222739822s 1.226919518s 1.234983604s 1.23599023s 1.25708021s 1.26390465s 1.280041421s 1.283958496s 1.286386473s 1.294199561s 1.299733529s] Sep 9 18:21:06.575: INFO: 50 %ile: 897.292704ms Sep 9 18:21:06.575: INFO: 90 %ile: 1.069084026s Sep 9 18:21:06.575: INFO: 99 %ile: 1.294199561s Sep 9 18:21:06.575: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:21:06.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-plkjh" for this suite. 
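The latency test above creates a Service per sample and records how long it takes for matching Endpoints to appear (the "Got endpoints" lines), then reports the 50/90/99 %ile over 200 samples. The real test drives this through an informer-based watch; the polling sketch below only illustrates how one such sample could be taken, with the service name, selector, and namespace as assumptions.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    ns := "default"

    start := time.Now()
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "latency-demo"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"name": "svc-latency-rc"}, // label of the RC's pods
            Ports:    []corev1.ServicePort{{Port: 80}},
        },
    }
    if _, err := client.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Poll Endpoints until an address shows up; the elapsed time is one latency sample.
    for {
        ep, err := client.CoreV1().Endpoints(ns).Get(context.TODO(), "latency-demo", metav1.GetOptions{})
        if err == nil && len(ep.Subsets) > 0 && len(ep.Subsets[0].Addresses) > 0 {
            fmt.Printf("Got endpoints: latency-demo [%v]\n", time.Since(start))
            return
        }
        time.Sleep(50 * time.Millisecond)
    }
}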
Sep 9 18:21:30.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:21:30.611: INFO: namespace: e2e-tests-svc-latency-plkjh, resource: bindings, ignored listing per whitelist Sep 9 18:21:30.677: INFO: namespace e2e-tests-svc-latency-plkjh deletion completed in 24.099468074s • [SLOW TEST:39.761 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:21:30.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-488b682c-f2c9-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:21:30.790: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-rrffx" to be "success or failure" Sep 9 18:21:30.795: INFO: Pod "pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304026ms Sep 9 18:21:32.799: INFO: Pod "pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008737922s Sep 9 18:21:34.803: INFO: Pod "pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 4.012297047s Sep 9 18:21:36.807: INFO: Pod "pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016841829s STEP: Saw pod success Sep 9 18:21:36.807: INFO: Pod "pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:21:36.810: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Sep 9 18:21:36.832: INFO: Waiting for pod pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007 to disappear Sep 9 18:21:36.850: INFO: Pod pod-projected-configmaps-488ddf12-f2c9-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:21:36.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rrffx" for this suite. 
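The projected ConfigMap test above mounts a ConfigMap through a projected volume and consumes it from a pod running as a non-root UID. A minimal client-go sketch follows; the UID, ConfigMap name, image, and namespace are illustrative, not the generated names the test uses.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    uid := int64(1000) // non-root user the container runs as
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-nonroot-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name: "projected-cfg",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "reader",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "cat /etc/projected/*"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-cfg", MountPath: "/etc/projected"}},
            }},
        },
    }
    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}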
Sep 9 18:21:42.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:21:42.923: INFO: namespace: e2e-tests-projected-rrffx, resource: bindings, ignored listing per whitelist Sep 9 18:21:42.935: INFO: namespace e2e-tests-projected-rrffx deletion completed in 6.080827692s • [SLOW TEST:12.257 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:21:42.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Sep 9 18:21:43.820: INFO: Pod name wrapped-volume-race-50475ce1-f2c9-11ea-88c2-0242ac110007: Found 0 pods out of 5 Sep 9 18:21:48.826: INFO: Pod name wrapped-volume-race-50475ce1-f2c9-11ea-88c2-0242ac110007: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-50475ce1-f2c9-11ea-88c2-0242ac110007 in namespace e2e-tests-emptydir-wrapper-kw7dd, will wait for the garbage collector to delete the pods Sep 9 18:23:50.912: INFO: Deleting ReplicationController wrapped-volume-race-50475ce1-f2c9-11ea-88c2-0242ac110007 took: 7.512503ms Sep 9 18:23:51.012: INFO: Terminating ReplicationController wrapped-volume-race-50475ce1-f2c9-11ea-88c2-0242ac110007 pods took: 100.358305ms STEP: Creating RC which spawns configmap-volume pods Sep 9 18:24:30.557: INFO: Pod name wrapped-volume-race-b3ae857c-f2c9-11ea-88c2-0242ac110007: Found 0 pods out of 5 Sep 9 18:24:35.564: INFO: Pod name wrapped-volume-race-b3ae857c-f2c9-11ea-88c2-0242ac110007: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b3ae857c-f2c9-11ea-88c2-0242ac110007 in namespace e2e-tests-emptydir-wrapper-kw7dd, will wait for the garbage collector to delete the pods Sep 9 18:27:09.646: INFO: Deleting ReplicationController wrapped-volume-race-b3ae857c-f2c9-11ea-88c2-0242ac110007 took: 8.773786ms Sep 9 18:27:09.746: INFO: Terminating ReplicationController wrapped-volume-race-b3ae857c-f2c9-11ea-88c2-0242ac110007 pods took: 100.238805ms STEP: Creating RC which spawns configmap-volume pods Sep 9 18:27:49.575: INFO: Pod name wrapped-volume-race-2a50b477-f2ca-11ea-88c2-0242ac110007: Found 0 pods out of 5 Sep 9 18:27:54.584: INFO: Pod name wrapped-volume-race-2a50b477-f2ca-11ea-88c2-0242ac110007: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-2a50b477-f2ca-11ea-88c2-0242ac110007 in namespace e2e-tests-emptydir-wrapper-kw7dd, will wait for the garbage collector to delete the pods Sep 9 18:29:46.665: INFO: Deleting ReplicationController wrapped-volume-race-2a50b477-f2ca-11ea-88c2-0242ac110007 took: 7.103966ms Sep 9 18:29:46.766: INFO: Terminating ReplicationController wrapped-volume-race-2a50b477-f2ca-11ea-88c2-0242ac110007 pods took: 100.204654ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:30:30.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kw7dd" for this suite. Sep 9 18:30:38.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:30:38.419: INFO: namespace: e2e-tests-emptydir-wrapper-kw7dd, resource: bindings, ignored listing per whitelist Sep 9 18:30:38.444: INFO: namespace e2e-tests-emptydir-wrapper-kw7dd deletion completed in 8.085438653s • [SLOW TEST:535.509 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:30:38.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:30:42.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-28fnl" for this suite. 
Sep 9 18:30:48.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:30:48.746: INFO: namespace: e2e-tests-emptydir-wrapper-28fnl, resource: bindings, ignored listing per whitelist Sep 9 18:30:48.784: INFO: namespace e2e-tests-emptydir-wrapper-28fnl deletion completed in 6.137141151s • [SLOW TEST:10.340 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:30:48.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6sf5b STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 9 18:30:48.853: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 9 18:31:14.956: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostName&protocol=http&host=10.244.1.16&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-6sf5b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:31:14.956: INFO: >>> kubeConfig: /root/.kube/config I0909 18:31:14.986987 6 log.go:172] (0xc00092de40) (0xc001f31b80) Create stream I0909 18:31:14.987016 6 log.go:172] (0xc00092de40) (0xc001f31b80) Stream added, broadcasting: 1 I0909 18:31:14.989476 6 log.go:172] (0xc00092de40) Reply frame received for 1 I0909 18:31:14.989512 6 log.go:172] (0xc00092de40) (0xc0019b3cc0) Create stream I0909 18:31:14.989525 6 log.go:172] (0xc00092de40) (0xc0019b3cc0) Stream added, broadcasting: 3 I0909 18:31:14.990374 6 log.go:172] (0xc00092de40) Reply frame received for 3 I0909 18:31:14.990449 6 log.go:172] (0xc00092de40) (0xc001ba06e0) Create stream I0909 18:31:14.990468 6 log.go:172] (0xc00092de40) (0xc001ba06e0) Stream added, broadcasting: 5 I0909 18:31:14.991298 6 log.go:172] (0xc00092de40) Reply frame received for 5 I0909 18:31:15.055362 6 log.go:172] (0xc00092de40) Data frame received for 3 I0909 18:31:15.055386 6 log.go:172] (0xc0019b3cc0) (3) Data frame handling I0909 18:31:15.055400 6 log.go:172] (0xc0019b3cc0) (3) Data frame sent I0909 18:31:15.055988 6 log.go:172] (0xc00092de40) Data frame received for 3 I0909 18:31:15.056076 6 log.go:172] (0xc0019b3cc0) (3) Data frame handling I0909 18:31:15.056399 6 log.go:172] (0xc00092de40) Data frame received for 5 I0909 18:31:15.056425 6 log.go:172] (0xc001ba06e0) (5) Data frame handling I0909 18:31:15.057851 6 log.go:172] (0xc00092de40) Data frame received 
for 1 I0909 18:31:15.057865 6 log.go:172] (0xc001f31b80) (1) Data frame handling I0909 18:31:15.057872 6 log.go:172] (0xc001f31b80) (1) Data frame sent I0909 18:31:15.058030 6 log.go:172] (0xc00092de40) (0xc001f31b80) Stream removed, broadcasting: 1 I0909 18:31:15.058087 6 log.go:172] (0xc00092de40) Go away received I0909 18:31:15.058163 6 log.go:172] (0xc00092de40) (0xc001f31b80) Stream removed, broadcasting: 1 I0909 18:31:15.058178 6 log.go:172] (0xc00092de40) (0xc0019b3cc0) Stream removed, broadcasting: 3 I0909 18:31:15.058185 6 log.go:172] (0xc00092de40) (0xc001ba06e0) Stream removed, broadcasting: 5 Sep 9 18:31:15.058: INFO: Waiting for endpoints: map[] Sep 9 18:31:15.061: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostName&protocol=http&host=10.244.2.246&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-6sf5b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:31:15.061: INFO: >>> kubeConfig: /root/.kube/config I0909 18:31:15.093909 6 log.go:172] (0xc0024b02c0) (0xc001ba0aa0) Create stream I0909 18:31:15.093937 6 log.go:172] (0xc0024b02c0) (0xc001ba0aa0) Stream added, broadcasting: 1 I0909 18:31:15.096140 6 log.go:172] (0xc0024b02c0) Reply frame received for 1 I0909 18:31:15.096216 6 log.go:172] (0xc0024b02c0) (0xc0019b3d60) Create stream I0909 18:31:15.096245 6 log.go:172] (0xc0024b02c0) (0xc0019b3d60) Stream added, broadcasting: 3 I0909 18:31:15.097222 6 log.go:172] (0xc0024b02c0) Reply frame received for 3 I0909 18:31:15.097251 6 log.go:172] (0xc0024b02c0) (0xc0019b3e00) Create stream I0909 18:31:15.097262 6 log.go:172] (0xc0024b02c0) (0xc0019b3e00) Stream added, broadcasting: 5 I0909 18:31:15.098078 6 log.go:172] (0xc0024b02c0) Reply frame received for 5 I0909 18:31:15.178601 6 log.go:172] (0xc0024b02c0) Data frame received for 3 I0909 18:31:15.178663 6 log.go:172] (0xc0019b3d60) (3) Data frame handling I0909 18:31:15.178709 6 log.go:172] (0xc0019b3d60) (3) Data frame sent I0909 18:31:15.179119 6 log.go:172] (0xc0024b02c0) Data frame received for 3 I0909 18:31:15.179165 6 log.go:172] (0xc0019b3d60) (3) Data frame handling I0909 18:31:15.179197 6 log.go:172] (0xc0024b02c0) Data frame received for 5 I0909 18:31:15.179214 6 log.go:172] (0xc0019b3e00) (5) Data frame handling I0909 18:31:15.180714 6 log.go:172] (0xc0024b02c0) Data frame received for 1 I0909 18:31:15.180737 6 log.go:172] (0xc001ba0aa0) (1) Data frame handling I0909 18:31:15.180748 6 log.go:172] (0xc001ba0aa0) (1) Data frame sent I0909 18:31:15.180763 6 log.go:172] (0xc0024b02c0) (0xc001ba0aa0) Stream removed, broadcasting: 1 I0909 18:31:15.180795 6 log.go:172] (0xc0024b02c0) Go away received I0909 18:31:15.180845 6 log.go:172] (0xc0024b02c0) (0xc001ba0aa0) Stream removed, broadcasting: 1 I0909 18:31:15.180857 6 log.go:172] (0xc0024b02c0) (0xc0019b3d60) Stream removed, broadcasting: 3 I0909 18:31:15.180873 6 log.go:172] (0xc0024b02c0) (0xc0019b3e00) Stream removed, broadcasting: 5 Sep 9 18:31:15.180: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:31:15.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-6sf5b" for this suite. 
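The networking test above execs curl inside the host-test-container pod and calls the netexec /dial endpoint on one test pod, asking it to reach the other test pod's hostName endpoint over HTTP; the exact URL shape appears verbatim in the log. The stdlib sketch below issues the same request directly. It would have to run somewhere that can reach the pod network, and the pod IPs are the placeholders taken from this run's log.

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Ask the netexec server at 10.244.1.17 to dial 10.244.1.16:8080 over HTTP
    // and return that pod's hostname; substitute the pod IPs from your cluster.
    url := "http://10.244.1.17:8080/dial?request=hostName&protocol=http&host=10.244.1.16&port=8080&tries=1"

    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body)) // a small JSON document listing the responses received
}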
Sep 9 18:31:37.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:31:37.243: INFO: namespace: e2e-tests-pod-network-test-6sf5b, resource: bindings, ignored listing per whitelist Sep 9 18:31:37.293: INFO: namespace e2e-tests-pod-network-test-6sf5b deletion completed in 22.10835612s • [SLOW TEST:48.508 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:31:37.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 9 18:31:37.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-sns8b' Sep 9 18:31:39.819: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 9 18:31:39.819: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Sep 9 18:31:39.850: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Sep 9 18:31:39.876: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Sep 9 18:31:39.885: INFO: scanned /root for discovery docs: Sep 9 18:31:39.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-sns8b' Sep 9 18:31:56.703: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Sep 9 18:31:56.703: INFO: stdout: "Created e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e\nScaling up e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Sep 9 18:31:56.703: INFO: stdout: "Created e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e\nScaling up e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Sep 9 18:31:56.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sns8b' Sep 9 18:31:56.801: INFO: stderr: "" Sep 9 18:31:56.802: INFO: stdout: "e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e-2s6xc " Sep 9 18:31:56.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e-2s6xc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sns8b' Sep 9 18:31:56.895: INFO: stderr: "" Sep 9 18:31:56.895: INFO: stdout: "true" Sep 9 18:31:56.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e-2s6xc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sns8b' Sep 9 18:31:56.996: INFO: stderr: "" Sep 9 18:31:56.996: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Sep 9 18:31:56.996: INFO: e2e-test-nginx-rc-b305971b00a4849b904855674fafb83e-2s6xc is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Sep 9 18:31:56.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sns8b' Sep 9 18:31:57.114: INFO: stderr: "" Sep 9 18:31:57.114: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:31:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sns8b" for this suite. Sep 9 18:32:19.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:32:19.218: INFO: namespace: e2e-tests-kubectl-sns8b, resource: bindings, ignored listing per whitelist Sep 9 18:32:19.220: INFO: namespace e2e-tests-kubectl-sns8b deletion completed in 22.103396585s • [SLOW TEST:41.927 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:32:19.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 9 18:32:27.457: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:27.487: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:29.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:29.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:31.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:31.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:33.488: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:33.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:35.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:35.491: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:37.488: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:37.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:39.488: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:39.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:41.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:41.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:43.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:43.491: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:45.487: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:45.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:47.488: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:47.492: INFO: Pod pod-with-poststart-exec-hook still exists Sep 9 18:32:49.488: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 9 18:32:49.491: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:32:49.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-v569c" for this suite. 
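
The pod-with-poststart-exec-hook pod referenced above is generated by the framework; a minimal hand-written equivalent is sketched below. The image and hook command are illustrative assumptions, not the exact spec the suite uses:

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created; if the hook
          # fails, the container is killed and restarted per its restart policy.
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
EOF
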
Sep 9 18:33:11.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:33:11.575: INFO: namespace: e2e-tests-container-lifecycle-hook-v569c, resource: bindings, ignored listing per whitelist Sep 9 18:33:11.582: INFO: namespace e2e-tests-container-lifecycle-hook-v569c deletion completed in 22.086566631s • [SLOW TEST:52.362 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:33:11.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-lqzmg [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-lqzmg STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-lqzmg STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-lqzmg STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-lqzmg Sep 9 18:33:15.747: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-lqzmg, name: ss-0, uid: ea85e35b-f2ca-11ea-b060-0242ac120006, status phase: Pending. Waiting for statefulset controller to delete. Sep 9 18:33:19.428: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-lqzmg, name: ss-0, uid: ea85e35b-f2ca-11ea-b060-0242ac120006, status phase: Failed. Waiting for statefulset controller to delete. Sep 9 18:33:19.453: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-lqzmg, name: ss-0, uid: ea85e35b-f2ca-11ea-b060-0242ac120006, status phase: Failed. Waiting for statefulset controller to delete. 
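
What the suite appears to be exercising at this point: a plain pod is holding a hostPort on the chosen node, so the stateful pod ss-0, created with the same hostPort and pinned to the same node, ends up Failed, and the StatefulSet controller is expected to delete and recreate it until the conflict is removed. A rough sketch of that setup follows; the node name, port, and object names are illustrative, not the generated ones from the log:

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: worker-1            # assumed node name; the test picks a real schedulable node
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 21017           # the port the stateful pod will also claim
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: worker-1        # pinned to the same node to force the hostPort conflict
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017
EOF

Deleting test-pod releases the host port, after which ss-0 is recreated once more and can finally reach Running, which is what the next log lines wait for.
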
Sep 9 18:33:19.460: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-lqzmg STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-lqzmg STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-lqzmg and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 9 18:33:23.595: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lqzmg Sep 9 18:33:23.597: INFO: Scaling statefulset ss to 0 Sep 9 18:33:33.660: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:33:33.663: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:33:33.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-lqzmg" for this suite. Sep 9 18:33:39.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:33:39.733: INFO: namespace: e2e-tests-statefulset-lqzmg, resource: bindings, ignored listing per whitelist Sep 9 18:33:39.801: INFO: namespace e2e-tests-statefulset-lqzmg deletion completed in 6.124257173s • [SLOW TEST:28.218 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:33:39.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-fb2503f4-f2ca-11ea-88c2-0242ac110007 STEP: Creating secret with name s-test-opt-upd-fb25044d-f2ca-11ea-88c2-0242ac110007 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fb2503f4-f2ca-11ea-88c2-0242ac110007 STEP: Updating secret s-test-opt-upd-fb25044d-f2ca-11ea-88c2-0242ac110007 STEP: Creating secret with name s-test-opt-create-fb250464-f2ca-11ea-88c2-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:33:50.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zdfw9" for this suite. 
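
A minimal sketch of the kind of projected-secret volume the optional-updates test above watches; the secret names are placeholders (the real ones carry generated suffixes), and optional: true is what lets the pod start even while one of the referenced secrets is absent:

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # deleted during the test; its files should disappear
          optional: true
      - secret:
          name: s-test-opt-upd      # updated during the test; its files should change
          optional: true
      - secret:
          name: s-test-opt-create   # created during the test; its files should appear
          optional: true
EOF

The kubelet periodically resyncs projected volumes, so deleting, updating, or creating the referenced secrets is eventually reflected in the mounted files; that resync is the "waiting to observe update in volume" step in the log above.
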
Sep 9 18:34:12.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:34:12.188: INFO: namespace: e2e-tests-projected-zdfw9, resource: bindings, ignored listing per whitelist Sep 9 18:34:12.206: INFO: namespace e2e-tests-projected-zdfw9 deletion completed in 22.088174106s • [SLOW TEST:32.405 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:34:12.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-0e74b242-f2cb-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:34:12.371: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-4ptz9" to be "success or failure" Sep 9 18:34:12.403: INFO: Pod "pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.321338ms Sep 9 18:34:14.426: INFO: Pod "pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054215s Sep 9 18:34:16.431: INFO: Pod "pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059597466s STEP: Saw pod success Sep 9 18:34:16.431: INFO: Pod "pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:34:16.433: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007 container configmap-volume-test: STEP: delete the pod Sep 9 18:34:16.507: INFO: Waiting for pod pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:34:16.515: INFO: Pod pod-configmaps-0e78bd3d-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:34:16.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4ptz9" for this suite. 
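
The "consumable in multiple volumes in the same pod" case above boils down to mounting one ConfigMap twice; a hand-rolled sketch with arbitrary names and mount paths:

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1 --namespace=demo

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Print the same key from both mounts, then exit so the pod can reach Succeeded.
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
EOF
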
Sep 9 18:34:22.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:34:22.539: INFO: namespace: e2e-tests-configmap-4ptz9, resource: bindings, ignored listing per whitelist Sep 9 18:34:22.610: INFO: namespace e2e-tests-configmap-4ptz9 deletion completed in 6.091910657s • [SLOW TEST:10.404 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:34:22.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-14a86e5b-f2cb-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:34:22.778: INFO: Waiting up to 5m0s for pod "pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-wz8vz" to be "success or failure" Sep 9 18:34:22.784: INFO: Pod "pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44002ms Sep 9 18:34:24.788: INFO: Pod "pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010369641s Sep 9 18:34:26.793: INFO: Pod "pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015056499s STEP: Saw pod success Sep 9 18:34:26.793: INFO: Pod "pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:34:26.795: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007 container configmap-volume-test: STEP: delete the pod Sep 9 18:34:26.958: INFO: Waiting for pod pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:34:26.976: INFO: Pod pod-configmaps-14aa0392-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:34:26.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wz8vz" for this suite. 
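
The "mappings and Item mode set" variant above remaps a ConfigMap key to a new path and gives that single file an explicit mode; a sketch with placeholder names:

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2; cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2   # key remapped to a different file name
        mode: 0400             # per-item mode, overriding the volume's default
EOF
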
Sep 9 18:34:32.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:34:33.081: INFO: namespace: e2e-tests-configmap-wz8vz, resource: bindings, ignored listing per whitelist Sep 9 18:34:33.106: INFO: namespace e2e-tests-configmap-wz8vz deletion completed in 6.126692933s • [SLOW TEST:10.496 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:34:33.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 9 18:34:43.242: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.242: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.273028 6 log.go:172] (0xc0024a02c0) (0xc0023f9720) Create stream I0909 18:34:43.273054 6 log.go:172] (0xc0024a02c0) (0xc0023f9720) Stream added, broadcasting: 1 I0909 18:34:43.276759 6 log.go:172] (0xc0024a02c0) Reply frame received for 1 I0909 18:34:43.276826 6 log.go:172] (0xc0024a02c0) (0xc001ad1a40) Create stream I0909 18:34:43.276842 6 log.go:172] (0xc0024a02c0) (0xc001ad1a40) Stream added, broadcasting: 3 I0909 18:34:43.279077 6 log.go:172] (0xc0024a02c0) Reply frame received for 3 I0909 18:34:43.279110 6 log.go:172] (0xc0024a02c0) (0xc000c22000) Create stream I0909 18:34:43.279127 6 log.go:172] (0xc0024a02c0) (0xc000c22000) Stream added, broadcasting: 5 I0909 18:34:43.279801 6 log.go:172] (0xc0024a02c0) Reply frame received for 5 I0909 18:34:43.356208 6 log.go:172] (0xc0024a02c0) Data frame received for 5 I0909 18:34:43.356255 6 log.go:172] (0xc000c22000) (5) Data frame handling I0909 18:34:43.356285 6 log.go:172] (0xc0024a02c0) Data frame received for 3 I0909 18:34:43.356300 6 log.go:172] (0xc001ad1a40) (3) Data frame handling I0909 18:34:43.356323 6 log.go:172] (0xc001ad1a40) (3) Data frame sent I0909 18:34:43.356334 6 log.go:172] (0xc0024a02c0) Data frame received for 3 I0909 18:34:43.356344 6 log.go:172] (0xc001ad1a40) (3) Data frame handling I0909 18:34:43.357350 6 log.go:172] (0xc0024a02c0) Data frame received for 1 I0909 18:34:43.357415 6 log.go:172] (0xc0023f9720) (1) Data frame handling I0909 18:34:43.357460 6 log.go:172] (0xc0023f9720) (1) Data frame sent I0909 18:34:43.357489 
6 log.go:172] (0xc0024a02c0) (0xc0023f9720) Stream removed, broadcasting: 1 I0909 18:34:43.357509 6 log.go:172] (0xc0024a02c0) Go away received I0909 18:34:43.357686 6 log.go:172] (0xc0024a02c0) (0xc0023f9720) Stream removed, broadcasting: 1 I0909 18:34:43.357732 6 log.go:172] (0xc0024a02c0) (0xc001ad1a40) Stream removed, broadcasting: 3 I0909 18:34:43.357749 6 log.go:172] (0xc0024a02c0) (0xc000c22000) Stream removed, broadcasting: 5 Sep 9 18:34:43.357: INFO: Exec stderr: "" Sep 9 18:34:43.357: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.357: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.385558 6 log.go:172] (0xc000fd0420) (0xc00043c3c0) Create stream I0909 18:34:43.385585 6 log.go:172] (0xc000fd0420) (0xc00043c3c0) Stream added, broadcasting: 1 I0909 18:34:43.387428 6 log.go:172] (0xc000fd0420) Reply frame received for 1 I0909 18:34:43.387455 6 log.go:172] (0xc000fd0420) (0xc00043c500) Create stream I0909 18:34:43.387464 6 log.go:172] (0xc000fd0420) (0xc00043c500) Stream added, broadcasting: 3 I0909 18:34:43.388489 6 log.go:172] (0xc000fd0420) Reply frame received for 3 I0909 18:34:43.388525 6 log.go:172] (0xc000fd0420) (0xc00218a000) Create stream I0909 18:34:43.388537 6 log.go:172] (0xc000fd0420) (0xc00218a000) Stream added, broadcasting: 5 I0909 18:34:43.389440 6 log.go:172] (0xc000fd0420) Reply frame received for 5 I0909 18:34:43.448626 6 log.go:172] (0xc000fd0420) Data frame received for 3 I0909 18:34:43.448658 6 log.go:172] (0xc00043c500) (3) Data frame handling I0909 18:34:43.448666 6 log.go:172] (0xc00043c500) (3) Data frame sent I0909 18:34:43.448671 6 log.go:172] (0xc000fd0420) Data frame received for 3 I0909 18:34:43.448674 6 log.go:172] (0xc00043c500) (3) Data frame handling I0909 18:34:43.448708 6 log.go:172] (0xc000fd0420) Data frame received for 5 I0909 18:34:43.448755 6 log.go:172] (0xc00218a000) (5) Data frame handling I0909 18:34:43.450080 6 log.go:172] (0xc000fd0420) Data frame received for 1 I0909 18:34:43.450103 6 log.go:172] (0xc00043c3c0) (1) Data frame handling I0909 18:34:43.450114 6 log.go:172] (0xc00043c3c0) (1) Data frame sent I0909 18:34:43.450122 6 log.go:172] (0xc000fd0420) (0xc00043c3c0) Stream removed, broadcasting: 1 I0909 18:34:43.450151 6 log.go:172] (0xc000fd0420) Go away received I0909 18:34:43.450195 6 log.go:172] (0xc000fd0420) (0xc00043c3c0) Stream removed, broadcasting: 1 I0909 18:34:43.450207 6 log.go:172] (0xc000fd0420) (0xc00043c500) Stream removed, broadcasting: 3 I0909 18:34:43.450214 6 log.go:172] (0xc000fd0420) (0xc00218a000) Stream removed, broadcasting: 5 Sep 9 18:34:43.450: INFO: Exec stderr: "" Sep 9 18:34:43.450: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.450: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.481379 6 log.go:172] (0xc00088d600) (0xc00218a280) Create stream I0909 18:34:43.481408 6 log.go:172] (0xc00088d600) (0xc00218a280) Stream added, broadcasting: 1 I0909 18:34:43.483678 6 log.go:172] (0xc00088d600) Reply frame received for 1 I0909 18:34:43.483721 6 log.go:172] (0xc00088d600) (0xc00043c640) Create stream I0909 18:34:43.483733 6 log.go:172] (0xc00088d600) (0xc00043c640) Stream added, broadcasting: 3 I0909 18:34:43.484698 6 log.go:172] 
(0xc00088d600) Reply frame received for 3 I0909 18:34:43.484728 6 log.go:172] (0xc00088d600) (0xc002112000) Create stream I0909 18:34:43.484741 6 log.go:172] (0xc00088d600) (0xc002112000) Stream added, broadcasting: 5 I0909 18:34:43.485650 6 log.go:172] (0xc00088d600) Reply frame received for 5 I0909 18:34:43.542632 6 log.go:172] (0xc00088d600) Data frame received for 5 I0909 18:34:43.542673 6 log.go:172] (0xc002112000) (5) Data frame handling I0909 18:34:43.542704 6 log.go:172] (0xc00088d600) Data frame received for 3 I0909 18:34:43.542723 6 log.go:172] (0xc00043c640) (3) Data frame handling I0909 18:34:43.542740 6 log.go:172] (0xc00043c640) (3) Data frame sent I0909 18:34:43.542754 6 log.go:172] (0xc00088d600) Data frame received for 3 I0909 18:34:43.542770 6 log.go:172] (0xc00043c640) (3) Data frame handling I0909 18:34:43.545382 6 log.go:172] (0xc00088d600) Data frame received for 1 I0909 18:34:43.545420 6 log.go:172] (0xc00218a280) (1) Data frame handling I0909 18:34:43.545439 6 log.go:172] (0xc00218a280) (1) Data frame sent I0909 18:34:43.545468 6 log.go:172] (0xc00088d600) (0xc00218a280) Stream removed, broadcasting: 1 I0909 18:34:43.545509 6 log.go:172] (0xc00088d600) Go away received I0909 18:34:43.545875 6 log.go:172] (0xc00088d600) (0xc00218a280) Stream removed, broadcasting: 1 I0909 18:34:43.545904 6 log.go:172] (0xc00088d600) (0xc00043c640) Stream removed, broadcasting: 3 I0909 18:34:43.545932 6 log.go:172] (0xc00088d600) (0xc002112000) Stream removed, broadcasting: 5 Sep 9 18:34:43.545: INFO: Exec stderr: "" Sep 9 18:34:43.546: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.546: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.571838 6 log.go:172] (0xc00088dce0) (0xc00218a500) Create stream I0909 18:34:43.571866 6 log.go:172] (0xc00088dce0) (0xc00218a500) Stream added, broadcasting: 1 I0909 18:34:43.573996 6 log.go:172] (0xc00088dce0) Reply frame received for 1 I0909 18:34:43.574040 6 log.go:172] (0xc00088dce0) (0xc00218a5a0) Create stream I0909 18:34:43.574056 6 log.go:172] (0xc00088dce0) (0xc00218a5a0) Stream added, broadcasting: 3 I0909 18:34:43.575292 6 log.go:172] (0xc00088dce0) Reply frame received for 3 I0909 18:34:43.575333 6 log.go:172] (0xc00088dce0) (0xc0019ec000) Create stream I0909 18:34:43.575357 6 log.go:172] (0xc00088dce0) (0xc0019ec000) Stream added, broadcasting: 5 I0909 18:34:43.576468 6 log.go:172] (0xc00088dce0) Reply frame received for 5 I0909 18:34:43.653103 6 log.go:172] (0xc00088dce0) Data frame received for 5 I0909 18:34:43.653168 6 log.go:172] (0xc0019ec000) (5) Data frame handling I0909 18:34:43.653211 6 log.go:172] (0xc00088dce0) Data frame received for 3 I0909 18:34:43.653236 6 log.go:172] (0xc00218a5a0) (3) Data frame handling I0909 18:34:43.653264 6 log.go:172] (0xc00218a5a0) (3) Data frame sent I0909 18:34:43.653328 6 log.go:172] (0xc00088dce0) Data frame received for 3 I0909 18:34:43.653352 6 log.go:172] (0xc00218a5a0) (3) Data frame handling I0909 18:34:43.654786 6 log.go:172] (0xc00088dce0) Data frame received for 1 I0909 18:34:43.654811 6 log.go:172] (0xc00218a500) (1) Data frame handling I0909 18:34:43.654838 6 log.go:172] (0xc00218a500) (1) Data frame sent I0909 18:34:43.654858 6 log.go:172] (0xc00088dce0) (0xc00218a500) Stream removed, broadcasting: 1 I0909 18:34:43.654880 6 log.go:172] (0xc00088dce0) Go away received I0909 18:34:43.655004 6 
log.go:172] (0xc00088dce0) (0xc00218a500) Stream removed, broadcasting: 1 I0909 18:34:43.655023 6 log.go:172] (0xc00088dce0) (0xc00218a5a0) Stream removed, broadcasting: 3 I0909 18:34:43.655039 6 log.go:172] (0xc00088dce0) (0xc0019ec000) Stream removed, broadcasting: 5 Sep 9 18:34:43.655: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 9 18:34:43.655: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.655: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.685331 6 log.go:172] (0xc000fd08f0) (0xc00043cd20) Create stream I0909 18:34:43.685364 6 log.go:172] (0xc000fd08f0) (0xc00043cd20) Stream added, broadcasting: 1 I0909 18:34:43.687705 6 log.go:172] (0xc000fd08f0) Reply frame received for 1 I0909 18:34:43.687743 6 log.go:172] (0xc000fd08f0) (0xc0021120a0) Create stream I0909 18:34:43.687753 6 log.go:172] (0xc000fd08f0) (0xc0021120a0) Stream added, broadcasting: 3 I0909 18:34:43.688895 6 log.go:172] (0xc000fd08f0) Reply frame received for 3 I0909 18:34:43.688933 6 log.go:172] (0xc000fd08f0) (0xc00218a640) Create stream I0909 18:34:43.688947 6 log.go:172] (0xc000fd08f0) (0xc00218a640) Stream added, broadcasting: 5 I0909 18:34:43.689904 6 log.go:172] (0xc000fd08f0) Reply frame received for 5 I0909 18:34:43.747231 6 log.go:172] (0xc000fd08f0) Data frame received for 5 I0909 18:34:43.747281 6 log.go:172] (0xc000fd08f0) Data frame received for 3 I0909 18:34:43.747338 6 log.go:172] (0xc0021120a0) (3) Data frame handling I0909 18:34:43.747354 6 log.go:172] (0xc0021120a0) (3) Data frame sent I0909 18:34:43.747370 6 log.go:172] (0xc000fd08f0) Data frame received for 3 I0909 18:34:43.747384 6 log.go:172] (0xc0021120a0) (3) Data frame handling I0909 18:34:43.747413 6 log.go:172] (0xc00218a640) (5) Data frame handling I0909 18:34:43.748987 6 log.go:172] (0xc000fd08f0) Data frame received for 1 I0909 18:34:43.749018 6 log.go:172] (0xc00043cd20) (1) Data frame handling I0909 18:34:43.749032 6 log.go:172] (0xc00043cd20) (1) Data frame sent I0909 18:34:43.749049 6 log.go:172] (0xc000fd08f0) (0xc00043cd20) Stream removed, broadcasting: 1 I0909 18:34:43.749074 6 log.go:172] (0xc000fd08f0) Go away received I0909 18:34:43.749222 6 log.go:172] (0xc000fd08f0) (0xc00043cd20) Stream removed, broadcasting: 1 I0909 18:34:43.749249 6 log.go:172] (0xc000fd08f0) (0xc0021120a0) Stream removed, broadcasting: 3 I0909 18:34:43.749265 6 log.go:172] (0xc000fd08f0) (0xc00218a640) Stream removed, broadcasting: 5 Sep 9 18:34:43.749: INFO: Exec stderr: "" Sep 9 18:34:43.749: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.749: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.787924 6 log.go:172] (0xc0024a0580) (0xc002112320) Create stream I0909 18:34:43.787954 6 log.go:172] (0xc0024a0580) (0xc002112320) Stream added, broadcasting: 1 I0909 18:34:43.789921 6 log.go:172] (0xc0024a0580) Reply frame received for 1 I0909 18:34:43.789957 6 log.go:172] (0xc0024a0580) (0xc0021123c0) Create stream I0909 18:34:43.789975 6 log.go:172] (0xc0024a0580) (0xc0021123c0) Stream added, broadcasting: 3 I0909 18:34:43.791036 6 log.go:172] (0xc0024a0580) Reply frame received for 3 I0909 18:34:43.791085 6 
log.go:172] (0xc0024a0580) (0xc000c22140) Create stream I0909 18:34:43.791101 6 log.go:172] (0xc0024a0580) (0xc000c22140) Stream added, broadcasting: 5 I0909 18:34:43.792131 6 log.go:172] (0xc0024a0580) Reply frame received for 5 I0909 18:34:43.843484 6 log.go:172] (0xc0024a0580) Data frame received for 3 I0909 18:34:43.843531 6 log.go:172] (0xc0021123c0) (3) Data frame handling I0909 18:34:43.843548 6 log.go:172] (0xc0021123c0) (3) Data frame sent I0909 18:34:43.843561 6 log.go:172] (0xc0024a0580) Data frame received for 3 I0909 18:34:43.843573 6 log.go:172] (0xc0021123c0) (3) Data frame handling I0909 18:34:43.843627 6 log.go:172] (0xc0024a0580) Data frame received for 5 I0909 18:34:43.843659 6 log.go:172] (0xc000c22140) (5) Data frame handling I0909 18:34:43.844964 6 log.go:172] (0xc0024a0580) Data frame received for 1 I0909 18:34:43.844988 6 log.go:172] (0xc002112320) (1) Data frame handling I0909 18:34:43.845000 6 log.go:172] (0xc002112320) (1) Data frame sent I0909 18:34:43.845016 6 log.go:172] (0xc0024a0580) (0xc002112320) Stream removed, broadcasting: 1 I0909 18:34:43.845040 6 log.go:172] (0xc0024a0580) Go away received I0909 18:34:43.845162 6 log.go:172] (0xc0024a0580) (0xc002112320) Stream removed, broadcasting: 1 I0909 18:34:43.845183 6 log.go:172] (0xc0024a0580) (0xc0021123c0) Stream removed, broadcasting: 3 I0909 18:34:43.845200 6 log.go:172] (0xc0024a0580) (0xc000c22140) Stream removed, broadcasting: 5 Sep 9 18:34:43.845: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 9 18:34:43.845: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.845: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.876467 6 log.go:172] (0xc0024a0a50) (0xc002112640) Create stream I0909 18:34:43.876496 6 log.go:172] (0xc0024a0a50) (0xc002112640) Stream added, broadcasting: 1 I0909 18:34:43.878239 6 log.go:172] (0xc0024a0a50) Reply frame received for 1 I0909 18:34:43.878285 6 log.go:172] (0xc0024a0a50) (0xc00218a6e0) Create stream I0909 18:34:43.878295 6 log.go:172] (0xc0024a0a50) (0xc00218a6e0) Stream added, broadcasting: 3 I0909 18:34:43.879101 6 log.go:172] (0xc0024a0a50) Reply frame received for 3 I0909 18:34:43.879147 6 log.go:172] (0xc0024a0a50) (0xc0019ec0a0) Create stream I0909 18:34:43.879164 6 log.go:172] (0xc0024a0a50) (0xc0019ec0a0) Stream added, broadcasting: 5 I0909 18:34:43.879796 6 log.go:172] (0xc0024a0a50) Reply frame received for 5 I0909 18:34:43.935602 6 log.go:172] (0xc0024a0a50) Data frame received for 5 I0909 18:34:43.935655 6 log.go:172] (0xc0019ec0a0) (5) Data frame handling I0909 18:34:43.935709 6 log.go:172] (0xc0024a0a50) Data frame received for 3 I0909 18:34:43.935730 6 log.go:172] (0xc00218a6e0) (3) Data frame handling I0909 18:34:43.935780 6 log.go:172] (0xc00218a6e0) (3) Data frame sent I0909 18:34:43.935823 6 log.go:172] (0xc0024a0a50) Data frame received for 3 I0909 18:34:43.935852 6 log.go:172] (0xc00218a6e0) (3) Data frame handling I0909 18:34:43.937946 6 log.go:172] (0xc0024a0a50) Data frame received for 1 I0909 18:34:43.937994 6 log.go:172] (0xc002112640) (1) Data frame handling I0909 18:34:43.938026 6 log.go:172] (0xc002112640) (1) Data frame sent I0909 18:34:43.938051 6 log.go:172] (0xc0024a0a50) (0xc002112640) Stream removed, broadcasting: 1 I0909 18:34:43.938101 6 log.go:172] (0xc0024a0a50) Go away 
received I0909 18:34:43.938325 6 log.go:172] (0xc0024a0a50) (0xc002112640) Stream removed, broadcasting: 1 I0909 18:34:43.938364 6 log.go:172] (0xc0024a0a50) (0xc00218a6e0) Stream removed, broadcasting: 3 I0909 18:34:43.938391 6 log.go:172] (0xc0024a0a50) (0xc0019ec0a0) Stream removed, broadcasting: 5 Sep 9 18:34:43.938: INFO: Exec stderr: "" Sep 9 18:34:43.938: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:43.938: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:43.968952 6 log.go:172] (0xc0024a0f20) (0xc0021128c0) Create stream I0909 18:34:43.968982 6 log.go:172] (0xc0024a0f20) (0xc0021128c0) Stream added, broadcasting: 1 I0909 18:34:43.974774 6 log.go:172] (0xc0024a0f20) Reply frame received for 1 I0909 18:34:43.974820 6 log.go:172] (0xc0024a0f20) (0xc002112960) Create stream I0909 18:34:43.974845 6 log.go:172] (0xc0024a0f20) (0xc002112960) Stream added, broadcasting: 3 I0909 18:34:43.976303 6 log.go:172] (0xc0024a0f20) Reply frame received for 3 I0909 18:34:43.976342 6 log.go:172] (0xc0024a0f20) (0xc0019ec140) Create stream I0909 18:34:43.976357 6 log.go:172] (0xc0024a0f20) (0xc0019ec140) Stream added, broadcasting: 5 I0909 18:34:43.977276 6 log.go:172] (0xc0024a0f20) Reply frame received for 5 I0909 18:34:44.035251 6 log.go:172] (0xc0024a0f20) Data frame received for 3 I0909 18:34:44.035283 6 log.go:172] (0xc002112960) (3) Data frame handling I0909 18:34:44.035305 6 log.go:172] (0xc0024a0f20) Data frame received for 5 I0909 18:34:44.035341 6 log.go:172] (0xc0019ec140) (5) Data frame handling I0909 18:34:44.035367 6 log.go:172] (0xc002112960) (3) Data frame sent I0909 18:34:44.035389 6 log.go:172] (0xc0024a0f20) Data frame received for 3 I0909 18:34:44.035403 6 log.go:172] (0xc002112960) (3) Data frame handling I0909 18:34:44.037380 6 log.go:172] (0xc0024a0f20) Data frame received for 1 I0909 18:34:44.037429 6 log.go:172] (0xc0021128c0) (1) Data frame handling I0909 18:34:44.037464 6 log.go:172] (0xc0021128c0) (1) Data frame sent I0909 18:34:44.037499 6 log.go:172] (0xc0024a0f20) (0xc0021128c0) Stream removed, broadcasting: 1 I0909 18:34:44.037544 6 log.go:172] (0xc0024a0f20) Go away received I0909 18:34:44.037656 6 log.go:172] (0xc0024a0f20) (0xc0021128c0) Stream removed, broadcasting: 1 I0909 18:34:44.037684 6 log.go:172] (0xc0024a0f20) (0xc002112960) Stream removed, broadcasting: 3 I0909 18:34:44.037699 6 log.go:172] (0xc0024a0f20) (0xc0019ec140) Stream removed, broadcasting: 5 Sep 9 18:34:44.037: INFO: Exec stderr: "" Sep 9 18:34:44.037: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:44.037: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:44.069734 6 log.go:172] (0xc000fd0dc0) (0xc00043d9a0) Create stream I0909 18:34:44.069756 6 log.go:172] (0xc000fd0dc0) (0xc00043d9a0) Stream added, broadcasting: 1 I0909 18:34:44.071429 6 log.go:172] (0xc000fd0dc0) Reply frame received for 1 I0909 18:34:44.071465 6 log.go:172] (0xc000fd0dc0) (0xc00043dea0) Create stream I0909 18:34:44.071478 6 log.go:172] (0xc000fd0dc0) (0xc00043dea0) Stream added, broadcasting: 3 I0909 18:34:44.072467 6 log.go:172] (0xc000fd0dc0) Reply frame received for 3 I0909 18:34:44.072504 6 log.go:172] (0xc000fd0dc0) (0xc000c22280) Create 
stream I0909 18:34:44.072517 6 log.go:172] (0xc000fd0dc0) (0xc000c22280) Stream added, broadcasting: 5 I0909 18:34:44.073469 6 log.go:172] (0xc000fd0dc0) Reply frame received for 5 I0909 18:34:44.136676 6 log.go:172] (0xc000fd0dc0) Data frame received for 3 I0909 18:34:44.136722 6 log.go:172] (0xc00043dea0) (3) Data frame handling I0909 18:34:44.136746 6 log.go:172] (0xc00043dea0) (3) Data frame sent I0909 18:34:44.136782 6 log.go:172] (0xc000fd0dc0) Data frame received for 3 I0909 18:34:44.136793 6 log.go:172] (0xc00043dea0) (3) Data frame handling I0909 18:34:44.136948 6 log.go:172] (0xc000fd0dc0) Data frame received for 5 I0909 18:34:44.136983 6 log.go:172] (0xc000c22280) (5) Data frame handling I0909 18:34:44.138463 6 log.go:172] (0xc000fd0dc0) Data frame received for 1 I0909 18:34:44.138480 6 log.go:172] (0xc00043d9a0) (1) Data frame handling I0909 18:34:44.138495 6 log.go:172] (0xc00043d9a0) (1) Data frame sent I0909 18:34:44.138507 6 log.go:172] (0xc000fd0dc0) (0xc00043d9a0) Stream removed, broadcasting: 1 I0909 18:34:44.138592 6 log.go:172] (0xc000fd0dc0) Go away received I0909 18:34:44.138631 6 log.go:172] (0xc000fd0dc0) (0xc00043d9a0) Stream removed, broadcasting: 1 I0909 18:34:44.138682 6 log.go:172] (0xc000fd0dc0) (0xc00043dea0) Stream removed, broadcasting: 3 I0909 18:34:44.138698 6 log.go:172] (0xc000fd0dc0) (0xc000c22280) Stream removed, broadcasting: 5 Sep 9 18:34:44.138: INFO: Exec stderr: "" Sep 9 18:34:44.138: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-w64mh PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 18:34:44.138: INFO: >>> kubeConfig: /root/.kube/config I0909 18:34:44.172568 6 log.go:172] (0xc00092de40) (0xc0019ec320) Create stream I0909 18:34:44.172592 6 log.go:172] (0xc00092de40) (0xc0019ec320) Stream added, broadcasting: 1 I0909 18:34:44.174703 6 log.go:172] (0xc00092de40) Reply frame received for 1 I0909 18:34:44.174739 6 log.go:172] (0xc00092de40) (0xc002112a00) Create stream I0909 18:34:44.174750 6 log.go:172] (0xc00092de40) (0xc002112a00) Stream added, broadcasting: 3 I0909 18:34:44.175738 6 log.go:172] (0xc00092de40) Reply frame received for 3 I0909 18:34:44.175791 6 log.go:172] (0xc00092de40) (0xc000c22460) Create stream I0909 18:34:44.175806 6 log.go:172] (0xc00092de40) (0xc000c22460) Stream added, broadcasting: 5 I0909 18:34:44.176868 6 log.go:172] (0xc00092de40) Reply frame received for 5 I0909 18:34:44.256349 6 log.go:172] (0xc00092de40) Data frame received for 3 I0909 18:34:44.256369 6 log.go:172] (0xc002112a00) (3) Data frame handling I0909 18:34:44.256383 6 log.go:172] (0xc002112a00) (3) Data frame sent I0909 18:34:44.256390 6 log.go:172] (0xc00092de40) Data frame received for 3 I0909 18:34:44.256397 6 log.go:172] (0xc002112a00) (3) Data frame handling I0909 18:34:44.256756 6 log.go:172] (0xc00092de40) Data frame received for 5 I0909 18:34:44.256780 6 log.go:172] (0xc000c22460) (5) Data frame handling I0909 18:34:44.258333 6 log.go:172] (0xc00092de40) Data frame received for 1 I0909 18:34:44.258346 6 log.go:172] (0xc0019ec320) (1) Data frame handling I0909 18:34:44.258362 6 log.go:172] (0xc0019ec320) (1) Data frame sent I0909 18:34:44.258450 6 log.go:172] (0xc00092de40) (0xc0019ec320) Stream removed, broadcasting: 1 I0909 18:34:44.258496 6 log.go:172] (0xc00092de40) Go away received I0909 18:34:44.258519 6 log.go:172] (0xc00092de40) (0xc0019ec320) Stream removed, broadcasting: 1 I0909 18:34:44.258540 6 
log.go:172] (0xc00092de40) (0xc002112a00) Stream removed, broadcasting: 3 I0909 18:34:44.258556 6 log.go:172] (0xc00092de40) (0xc000c22460) Stream removed, broadcasting: 5 Sep 9 18:34:44.258: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:34:44.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-w64mh" for this suite. Sep 9 18:35:34.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:35:34.282: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-w64mh, resource: bindings, ignored listing per whitelist Sep 9 18:35:34.355: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-w64mh deletion completed in 50.09310547s • [SLOW TEST:61.248 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:35:34.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 9 18:35:42.547: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 9 18:35:42.560: INFO: Pod pod-with-prestop-http-hook still exists Sep 9 18:35:44.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 9 18:35:44.564: INFO: Pod pod-with-prestop-http-hook still exists Sep 9 18:35:46.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 9 18:35:46.564: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:35:46.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6nmbw" for this suite. 
Sep 9 18:36:08.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:36:08.655: INFO: namespace: e2e-tests-container-lifecycle-hook-6nmbw, resource: bindings, ignored listing per whitelist Sep 9 18:36:08.683: INFO: namespace e2e-tests-container-lifecycle-hook-6nmbw deletion completed in 22.106734675s • [SLOW TEST:34.328 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:36:08.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-q855z STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q855z to expose endpoints map[] Sep 9 18:36:08.848: INFO: Get endpoints failed (10.034412ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Sep 9 18:36:09.852: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q855z exposes endpoints map[] (1.014434261s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-q855z STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q855z to expose endpoints map[pod1:[100]] Sep 9 18:36:13.925: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q855z exposes endpoints map[pod1:[100]] (4.065845444s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-q855z STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q855z to expose endpoints map[pod1:[100] pod2:[101]] Sep 9 18:36:17.028: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q855z exposes endpoints map[pod1:[100] pod2:[101]] (3.098445086s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-q855z STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q855z to expose endpoints map[pod2:[101]] Sep 9 18:36:18.059: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q855z exposes endpoints map[pod2:[101]] (1.027669876s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-q855z STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q855z to expose endpoints map[] Sep 9 
18:36:18.070: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q855z exposes endpoints map[] (5.977675ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:36:18.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-q855z" for this suite. Sep 9 18:36:40.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:36:40.223: INFO: namespace: e2e-tests-services-q855z, resource: bindings, ignored listing per whitelist Sep 9 18:36:40.269: INFO: namespace e2e-tests-services-q855z deletion completed in 22.095614431s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.586 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:36:40.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 9 18:36:40.417: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 9 18:36:45.422: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:36:46.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-dstkn" for this suite. 
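
The "release no longer matching pods" step above works by relabelling a pod out from under its ReplicationController: once its labels stop matching the RC's selector, the controller orphans it (dropping the controller ownerReference) and creates a replacement to keep the replica count. A sketch of the same flow, with placeholder label and namespace:

# Find the pod the RC created, then change the label its selector matches on.
POD=$(kubectl get pods -l name=pod-release --namespace=demo -o name | head -n1)
kubectl label --overwrite --namespace=demo "$POD" name=pod-release-released

# The relabelled pod keeps running but is no longer owned by the RC,
# which spins up a new replica alongside it.
kubectl get pods --namespace=demo --show-labels
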
Sep 9 18:36:52.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:36:52.500: INFO: namespace: e2e-tests-replication-controller-dstkn, resource: bindings, ignored listing per whitelist Sep 9 18:36:52.531: INFO: namespace e2e-tests-replication-controller-dstkn deletion completed in 6.087241711s • [SLOW TEST:12.261 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:36:52.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:36:52.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-qm5l7" to be "success or failure" Sep 9 18:36:52.756: INFO: Pod "downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 37.605392ms Sep 9 18:36:54.761: INFO: Pod "downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042067056s Sep 9 18:36:56.765: INFO: Pod "downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046267301s STEP: Saw pod success Sep 9 18:36:56.765: INFO: Pod "downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:36:56.768: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:36:56.784: INFO: Waiting for pod downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:36:56.788: INFO: Pod downwardapi-volume-6e113a9c-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:36:56.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qm5l7" for this suite. 
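
A sketch of the projected downward-API volume the memory-limit test above reads from; the pod and file names are placeholders. Note that resourceFieldRef requires containerName when used inside a volume:

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # written to the file in bytes (67108864 for 64Mi)
EOF
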
Sep 9 18:37:02.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:37:02.860: INFO: namespace: e2e-tests-projected-qm5l7, resource: bindings, ignored listing per whitelist Sep 9 18:37:02.887: INFO: namespace e2e-tests-projected-qm5l7 deletion completed in 6.095422433s • [SLOW TEST:10.356 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:37:02.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:37:02.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-cxgf2" to be "success or failure" Sep 9 18:37:03.005: INFO: Pod "downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.239804ms Sep 9 18:37:05.008: INFO: Pod "downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012793183s Sep 9 18:37:07.012: INFO: Pod "downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016855507s STEP: Saw pod success Sep 9 18:37:07.013: INFO: Pod "downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:37:07.016: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:37:07.030: INFO: Waiting for pod downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:37:07.051: INFO: Pod downwardapi-volume-742c9446-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:37:07.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cxgf2" for this suite. 
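
The DefaultMode variant above is the same kind of projected volume, but with a volume-wide file mode instead of per-item ones; a minimal sketch with placeholder names:

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # applied to every projected file unless an item overrides it
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
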
Sep 9 18:37:13.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:37:13.143: INFO: namespace: e2e-tests-projected-cxgf2, resource: bindings, ignored listing per whitelist Sep 9 18:37:13.144: INFO: namespace e2e-tests-projected-cxgf2 deletion completed in 6.088786441s • [SLOW TEST:10.257 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:37:13.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Sep 9 18:37:13.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:13.561: INFO: stderr: "" Sep 9 18:37:13.561: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 9 18:37:13.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:13.672: INFO: stderr: "" Sep 9 18:37:13.672: INFO: stdout: "update-demo-nautilus-75b7x update-demo-nautilus-h97zf " Sep 9 18:37:13.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75b7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:13.768: INFO: stderr: "" Sep 9 18:37:13.768: INFO: stdout: "" Sep 9 18:37:13.768: INFO: update-demo-nautilus-75b7x is created but not running Sep 9 18:37:18.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:18.875: INFO: stderr: "" Sep 9 18:37:18.875: INFO: stdout: "update-demo-nautilus-75b7x update-demo-nautilus-h97zf " Sep 9 18:37:18.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75b7x -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:18.990: INFO: stderr: "" Sep 9 18:37:18.990: INFO: stdout: "true" Sep 9 18:37:18.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-75b7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:19.087: INFO: stderr: "" Sep 9 18:37:19.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 9 18:37:19.087: INFO: validating pod update-demo-nautilus-75b7x Sep 9 18:37:19.100: INFO: got data: { "image": "nautilus.jpg" } Sep 9 18:37:19.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 9 18:37:19.100: INFO: update-demo-nautilus-75b7x is verified up and running Sep 9 18:37:19.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h97zf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:19.193: INFO: stderr: "" Sep 9 18:37:19.193: INFO: stdout: "true" Sep 9 18:37:19.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h97zf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:19.289: INFO: stderr: "" Sep 9 18:37:19.289: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 9 18:37:19.289: INFO: validating pod update-demo-nautilus-h97zf Sep 9 18:37:19.356: INFO: got data: { "image": "nautilus.jpg" } Sep 9 18:37:19.356: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 9 18:37:19.356: INFO: update-demo-nautilus-h97zf is verified up and running STEP: using delete to clean up resources Sep 9 18:37:19.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:19.467: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 9 18:37:19.467: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 9 18:37:19.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-6z5rg' Sep 9 18:37:19.562: INFO: stderr: "No resources found.\n" Sep 9 18:37:19.562: INFO: stdout: "" Sep 9 18:37:19.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-6z5rg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 9 18:37:19.674: INFO: stderr: "" Sep 9 18:37:19.674: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:37:19.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6z5rg" for this suite. Sep 9 18:37:25.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:37:25.915: INFO: namespace: e2e-tests-kubectl-6z5rg, resource: bindings, ignored listing per whitelist Sep 9 18:37:25.975: INFO: namespace e2e-tests-kubectl-6z5rg deletion completed in 6.296449395s • [SLOW TEST:12.830 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:37:25.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-82015fef-f2cb-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 18:37:26.242: INFO: Waiting up to 5m0s for pod "pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-vkjdh" to be "success or failure" Sep 9 18:37:26.245: INFO: Pod "pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657946ms Sep 9 18:37:28.473: INFO: Pod "pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231074193s Sep 9 18:37:30.476: INFO: Pod "pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.234699403s STEP: Saw pod success Sep 9 18:37:30.477: INFO: Pod "pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:37:30.479: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007 container secret-volume-test: STEP: delete the pod Sep 9 18:37:30.532: INFO: Waiting for pod pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:37:30.539: INFO: Pod pod-secrets-8209af72-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:37:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vkjdh" for this suite. Sep 9 18:37:36.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:37:36.580: INFO: namespace: e2e-tests-secrets-vkjdh, resource: bindings, ignored listing per whitelist Sep 9 18:37:36.632: INFO: namespace e2e-tests-secrets-vkjdh deletion completed in 6.089855077s • [SLOW TEST:10.657 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:37:36.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Sep 9 18:37:36.767: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:37:42.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-65hc4" for this suite. 
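The init-container case above boils down to a pod whose init container exits non-zero under restartPolicy: Never, so the app container never starts and the pod ends up Failed. A minimal sketch, assuming an illustrative image and names:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example   # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]   # init container fails once and is not retried
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]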
Sep 9 18:37:48.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:37:48.974: INFO: namespace: e2e-tests-init-container-65hc4, resource: bindings, ignored listing per whitelist Sep 9 18:37:49.023: INFO: namespace e2e-tests-init-container-65hc4 deletion completed in 6.091612566s • [SLOW TEST:12.390 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:37:49.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0909 18:37:59.211068 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 9 18:37:59.211: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:37:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-n9ntv" for this suite. 
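The garbage-collector case above creates a ReplicationController, deletes it without orphaning, and then waits for the GC to remove its pods. A hedged sketch of the equivalent cleanup from the command line; the resource and namespace names are illustrative, and how kubectl maps this to the API can vary by client version:

kubectl delete replicationcontroller my-rc -n my-namespace
# At the API level this corresponds to a DELETE whose propagation policy is
# "Background" or "Foreground"; a policy of "Orphan" would instead leave the
# pods running after the controller is gone.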
Sep 9 18:38:05.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:38:05.274: INFO: namespace: e2e-tests-gc-n9ntv, resource: bindings, ignored listing per whitelist Sep 9 18:38:05.312: INFO: namespace e2e-tests-gc-n9ntv deletion completed in 6.097339111s • [SLOW TEST:16.288 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:38:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Sep 9 18:38:05.414: INFO: Waiting up to 5m0s for pod "downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-5w2pr" to be "success or failure" Sep 9 18:38:05.432: INFO: Pod "downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.564141ms Sep 9 18:38:07.435: INFO: Pod "downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02145064s Sep 9 18:38:09.439: INFO: Pod "downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025256385s STEP: Saw pod success Sep 9 18:38:09.439: INFO: Pod "downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:38:09.442: INFO: Trying to get logs from node hunter-worker pod downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007 container dapi-container: STEP: delete the pod Sep 9 18:38:09.462: INFO: Waiting for pod downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:38:09.467: INFO: Pod downward-api-9963e14a-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:38:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5w2pr" for this suite. 
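The downward API env-var case above injects the node's IP into the container environment through a fieldRef. A minimal sketch, assuming an illustrative image and names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start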
Sep 9 18:38:15.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:38:15.571: INFO: namespace: e2e-tests-downward-api-5w2pr, resource: bindings, ignored listing per whitelist Sep 9 18:38:15.592: INFO: namespace e2e-tests-downward-api-5w2pr deletion completed in 6.122205543s • [SLOW TEST:10.280 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:38:15.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Sep 9 18:38:15.685: INFO: Waiting up to 5m0s for pod "client-containers-9f831836-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-containers-4szt4" to be "success or failure" Sep 9 18:38:15.718: INFO: Pod "client-containers-9f831836-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.013421ms Sep 9 18:38:17.722: INFO: Pod "client-containers-9f831836-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036675889s Sep 9 18:38:19.726: INFO: Pod "client-containers-9f831836-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040937015s STEP: Saw pod success Sep 9 18:38:19.726: INFO: Pod "client-containers-9f831836-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:38:19.729: INFO: Trying to get logs from node hunter-worker pod client-containers-9f831836-f2cb-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 18:38:19.751: INFO: Waiting for pod client-containers-9f831836-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:38:19.771: INFO: Pod client-containers-9f831836-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:38:19.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4szt4" for this suite. 
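The container-override case above sets both command and args, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch, assuming an illustrative image and values:

apiVersion: v1
kind: Pod
metadata:
  name: override-all-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]               # replaces the image ENTRYPOINT
    args: ["overridden", "arguments"]    # replaces the image CMD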
Sep 9 18:38:25.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:38:25.821: INFO: namespace: e2e-tests-containers-4szt4, resource: bindings, ignored listing per whitelist Sep 9 18:38:25.884: INFO: namespace e2e-tests-containers-4szt4 deletion completed in 6.10983793s • [SLOW TEST:10.292 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:38:25.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Sep 9 18:38:30.579: INFO: Successfully updated pod "annotationupdatea5af10fd-f2cb-11ea-88c2-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:38:32.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tkxgl" for this suite. 
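The annotation-update case above relies on a downward API volume item that points at metadata.annotations; when the pod's annotations are patched, the kubelet rewrites the mounted file, which is why the test takes longer than the simple "create and read" cases. A minimal sketch of the volume stanza, with illustrative names:

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

Patching the pod afterwards, for example with kubectl annotate pod <pod-name> example-key=updated --overwrite, is what triggers the refresh of the file.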
Sep 9 18:38:54.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:38:54.733: INFO: namespace: e2e-tests-downward-api-tkxgl, resource: bindings, ignored listing per whitelist Sep 9 18:38:54.778: INFO: namespace e2e-tests-downward-api-tkxgl deletion completed in 22.111515737s • [SLOW TEST:28.894 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:38:54.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:38:59.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-qmmn5" for this suite. 
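The kubelet hostAliases case above adds static entries to the container's /etc/hosts. A minimal sketch, assuming illustrative addresses and hostnames:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example   # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]   # the aliases appear in this file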
Sep 9 18:39:41.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:39:41.050: INFO: namespace: e2e-tests-kubelet-test-qmmn5, resource: bindings, ignored listing per whitelist Sep 9 18:39:41.110: INFO: namespace e2e-tests-kubelet-test-qmmn5 deletion completed in 42.096920022s • [SLOW TEST:46.331 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:39:41.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Sep 9 18:39:45.822: INFO: Successfully updated pod "labelsupdated280c3e4-f2cb-11ea-88c2-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:39:47.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-grmds" for this suite. 
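The label-update case above is the projected-volume variant of the same mechanism, pointing at metadata.labels instead of annotations. A minimal volume sketch, with illustrative names:

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Relabeling the running pod, for example with kubectl label pod <pod-name> example-key=updated --overwrite, causes the kubelet to rewrite the mounted file.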
Sep 9 18:40:09.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:40:09.903: INFO: namespace: e2e-tests-projected-grmds, resource: bindings, ignored listing per whitelist Sep 9 18:40:09.999: INFO: namespace e2e-tests-projected-grmds deletion completed in 22.146136497s • [SLOW TEST:28.889 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:40:09.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-hgkt STEP: Creating a pod to test atomic-volume-subpath Sep 9 18:40:10.169: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hgkt" in namespace "e2e-tests-subpath-62dgx" to be "success or failure" Sep 9 18:40:10.185: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.125172ms Sep 9 18:40:12.205: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035860543s Sep 9 18:40:14.209: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039607977s Sep 9 18:40:16.277: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 6.108077926s Sep 9 18:40:18.281: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 8.111635716s Sep 9 18:40:20.285: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 10.115917642s Sep 9 18:40:22.289: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 12.1195068s Sep 9 18:40:24.293: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 14.123357472s Sep 9 18:40:26.298: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 16.12834459s Sep 9 18:40:28.302: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 18.132774787s Sep 9 18:40:30.306: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 20.136265209s Sep 9 18:40:32.310: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. Elapsed: 22.140361404s Sep 9 18:40:34.314: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.144395738s Sep 9 18:40:36.317: INFO: Pod "pod-subpath-test-downwardapi-hgkt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.147997294s STEP: Saw pod success Sep 9 18:40:36.317: INFO: Pod "pod-subpath-test-downwardapi-hgkt" satisfied condition "success or failure" Sep 9 18:40:36.319: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-hgkt container test-container-subpath-downwardapi-hgkt: STEP: delete the pod Sep 9 18:40:36.405: INFO: Waiting for pod pod-subpath-test-downwardapi-hgkt to disappear Sep 9 18:40:36.413: INFO: Pod pod-subpath-test-downwardapi-hgkt no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-hgkt Sep 9 18:40:36.413: INFO: Deleting pod "pod-subpath-test-downwardapi-hgkt" in namespace "e2e-tests-subpath-62dgx" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:40:36.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-62dgx" for this suite. Sep 9 18:40:42.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:40:42.473: INFO: namespace: e2e-tests-subpath-62dgx, resource: bindings, ignored listing per whitelist Sep 9 18:40:42.508: INFO: namespace e2e-tests-subpath-62dgx deletion completed in 6.08828979s • [SLOW TEST:32.509 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:40:42.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f71853b0-f2cb-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 18:40:42.648: INFO: Waiting up to 5m0s for pod "pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-rkf27" to be "success or failure" Sep 9 18:40:42.659: INFO: Pod "pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.671471ms Sep 9 18:40:44.663: INFO: Pod "pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014753387s Sep 9 18:40:46.666: INFO: Pod "pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018200857s STEP: Saw pod success Sep 9 18:40:46.666: INFO: Pod "pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:40:46.669: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007 container secret-env-test: STEP: delete the pod Sep 9 18:40:46.687: INFO: Waiting for pod pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007 to disappear Sep 9 18:40:46.698: INFO: Pod pod-secrets-f71ac219-f2cb-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:40:46.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rkf27" for this suite. Sep 9 18:40:52.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:40:52.769: INFO: namespace: e2e-tests-secrets-rkf27, resource: bindings, ignored listing per whitelist Sep 9 18:40:52.820: INFO: namespace e2e-tests-secrets-rkf27 deletion completed in 6.119416538s • [SLOW TEST:10.312 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:40:52.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 9 18:40:57.493: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fd42eea7-f2cb-11ea-88c2-0242ac110007" Sep 9 18:40:57.493: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fd42eea7-f2cb-11ea-88c2-0242ac110007" in namespace "e2e-tests-pods-8rvb4" to be "terminated due to deadline exceeded" Sep 9 18:40:57.516: INFO: Pod "pod-update-activedeadlineseconds-fd42eea7-f2cb-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 22.886609ms Sep 9 18:40:59.520: INFO: Pod "pod-update-activedeadlineseconds-fd42eea7-f2cb-11ea-88c2-0242ac110007": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.02701109s Sep 9 18:40:59.520: INFO: Pod "pod-update-activedeadlineseconds-fd42eea7-f2cb-11ea-88c2-0242ac110007" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:40:59.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8rvb4" for this suite. Sep 9 18:41:05.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:41:05.613: INFO: namespace: e2e-tests-pods-8rvb4, resource: bindings, ignored listing per whitelist Sep 9 18:41:05.652: INFO: namespace e2e-tests-pods-8rvb4 deletion completed in 6.128083885s • [SLOW TEST:12.832 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:41:05.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:41:05.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-466k5" to be "success or failure" Sep 9 18:41:05.816: INFO: Pod "downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 41.12242ms Sep 9 18:41:07.820: INFO: Pod "downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045088136s Sep 9 18:41:09.825: INFO: Pod "downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049741573s STEP: Saw pod success Sep 9 18:41:09.825: INFO: Pod "downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:41:09.828: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:41:09.874: INFO: Waiting for pod downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:41:09.890: INFO: Pod downwardapi-volume-04e6964e-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:41:09.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-466k5" for this suite. Sep 9 18:41:15.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:41:15.941: INFO: namespace: e2e-tests-projected-466k5, resource: bindings, ignored listing per whitelist Sep 9 18:41:15.979: INFO: namespace e2e-tests-projected-466k5 deletion completed in 6.0850308s • [SLOW TEST:10.327 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:41:15.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Sep 9 18:41:16.190: INFO: Waiting up to 5m0s for pod "client-containers-0b153941-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-containers-bcv77" to be "success or failure" Sep 9 18:41:16.196: INFO: Pod "client-containers-0b153941-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.723146ms Sep 9 18:41:18.199: INFO: Pod "client-containers-0b153941-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00863606s Sep 9 18:41:20.208: INFO: Pod "client-containers-0b153941-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018144345s STEP: Saw pod success Sep 9 18:41:20.208: INFO: Pod "client-containers-0b153941-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:41:20.210: INFO: Trying to get logs from node hunter-worker pod client-containers-0b153941-f2cc-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 18:41:20.251: INFO: Waiting for pod client-containers-0b153941-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:41:20.262: INFO: Pod client-containers-0b153941-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:41:20.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-bcv77" for this suite. Sep 9 18:41:26.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:41:26.368: INFO: namespace: e2e-tests-containers-bcv77, resource: bindings, ignored listing per whitelist Sep 9 18:41:26.395: INFO: namespace e2e-tests-containers-bcv77 deletion completed in 6.131130451s • [SLOW TEST:10.416 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:41:26.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 18:41:26.494: INFO: Creating ReplicaSet my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007 Sep 9 18:41:26.571: INFO: Pod name my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007: Found 0 pods out of 1 Sep 9 18:41:31.575: INFO: Pod name my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007: Found 1 pods out of 1 Sep 9 18:41:31.575: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007" is running Sep 9 18:41:31.579: INFO: Pod "my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007-kcswz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 18:41:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 18:41:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 18:41:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 18:41:26 +0000 UTC Reason: Message:}]) Sep 9 18:41:31.579: INFO: Trying to dial the pod Sep 9 18:41:36.591: INFO: Controller 
my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007: Got expected result from replica 1 [my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007-kcswz]: "my-hostname-basic-1140f59e-f2cc-11ea-88c2-0242ac110007-kcswz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:41:36.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-2fmvv" for this suite. Sep 9 18:41:42.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:41:42.690: INFO: namespace: e2e-tests-replicaset-2fmvv, resource: bindings, ignored listing per whitelist Sep 9 18:41:42.695: INFO: namespace e2e-tests-replicaset-2fmvv deletion completed in 6.099885553s • [SLOW TEST:16.299 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:41:42.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Sep 9 18:41:42.772: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 9 18:41:42.833: INFO: Waiting for terminating namespaces to be deleted... Sep 9 18:41:42.835: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Sep 9 18:41:42.840: INFO: kindnet-4qkqp from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded) Sep 9 18:41:42.840: INFO: Container kindnet-cni ready: true, restart count 0 Sep 9 18:41:42.840: INFO: kube-proxy-t9g4m from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded) Sep 9 18:41:42.840: INFO: Container kube-proxy ready: true, restart count 0 Sep 9 18:41:42.840: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Sep 9 18:41:42.844: INFO: kube-proxy-vl5mq from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded) Sep 9 18:41:42.844: INFO: Container kube-proxy ready: true, restart count 0 Sep 9 18:41:42.844: INFO: kindnet-z7tw7 from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded) Sep 9 18:41:42.844: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
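The scheduling case here creates a pod whose nodeSelector matches no node label, so the scheduler reports the FailedScheduling event shown just below. A minimal sketch of such a pod, assuming an illustrative label key/value and image (the image is never pulled, since the pod stays Pending):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-example   # hypothetical name
spec:
  nodeSelector:
    label-that-no-node-has: "42"   # no node carries this label
  containers:
  - name: pause
    image: busybox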
STEP: Considering event: Type = [Warning], Name = [restricted-pod.163331aaee33c1b7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:41:43.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-sn857" for this suite. Sep 9 18:41:50.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:41:50.369: INFO: namespace: e2e-tests-sched-pred-sn857, resource: bindings, ignored listing per whitelist Sep 9 18:41:50.435: INFO: namespace e2e-tests-sched-pred-sn857 deletion completed in 6.568491916s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.740 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:41:50.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-1f93d5fa-f2cc-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:41:50.550: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-m8jf7" to be "success or failure" Sep 9 18:41:50.565: INFO: Pod "pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.189342ms Sep 9 18:41:52.619: INFO: Pod "pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068716764s Sep 9 18:41:54.622: INFO: Pod "pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072185842s STEP: Saw pod success Sep 9 18:41:54.622: INFO: Pod "pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:41:54.624: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007 container configmap-volume-test: STEP: delete the pod Sep 9 18:41:54.657: INFO: Waiting for pod pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:41:54.689: INFO: Pod pod-configmaps-1f95db72-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:41:54.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m8jf7" for this suite. Sep 9 18:42:00.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:42:00.755: INFO: namespace: e2e-tests-configmap-m8jf7, resource: bindings, ignored listing per whitelist Sep 9 18:42:00.786: INFO: namespace e2e-tests-configmap-m8jf7 deletion completed in 6.094067533s • [SLOW TEST:10.350 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:42:00.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:42:00.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-h4hdw" to be "success or failure" Sep 9 18:42:00.954: INFO: Pod "downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417168ms Sep 9 18:42:02.957: INFO: Pod "downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006845733s Sep 9 18:42:04.962: INFO: Pod "downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011122641s STEP: Saw pod success Sep 9 18:42:04.962: INFO: Pod "downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:42:04.965: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:42:04.979: INFO: Waiting for pod downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:42:04.990: INFO: Pod downwardapi-volume-25c8371b-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:42:04.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-h4hdw" for this suite. Sep 9 18:42:11.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:42:11.138: INFO: namespace: e2e-tests-downward-api-h4hdw, resource: bindings, ignored listing per whitelist Sep 9 18:42:11.153: INFO: namespace e2e-tests-downward-api-h4hdw deletion completed in 6.159351337s • [SLOW TEST:10.367 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:42:11.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 9 18:42:11.271: INFO: Waiting up to 5m0s for pod "pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-cdmb8" to be "success or failure" Sep 9 18:42:11.296: INFO: Pod "pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 25.196356ms Sep 9 18:42:13.300: INFO: Pod "pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029018042s Sep 9 18:42:15.304: INFO: Pod "pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033096248s STEP: Saw pod success Sep 9 18:42:15.304: INFO: Pod "pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:42:15.307: INFO: Trying to get logs from node hunter-worker2 pod pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 18:42:15.325: INFO: Waiting for pod pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:42:15.329: INFO: Pod pod-2bf0d8df-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:42:15.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cdmb8" for this suite. Sep 9 18:42:21.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:42:21.395: INFO: namespace: e2e-tests-emptydir-cdmb8, resource: bindings, ignored listing per whitelist Sep 9 18:42:21.420: INFO: namespace e2e-tests-emptydir-cdmb8 deletion completed in 6.088790426s • [SLOW TEST:10.267 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:42:21.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 9 18:42:21.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-w2kfg' Sep 9 18:42:24.224: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 9 18:42:24.224: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Sep 9 18:42:24.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-w2kfg' Sep 9 18:42:24.353: INFO: stderr: "" Sep 9 18:42:24.353: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:42:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w2kfg" for this suite. Sep 9 18:42:46.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:42:46.400: INFO: namespace: e2e-tests-kubectl-w2kfg, resource: bindings, ignored listing per whitelist Sep 9 18:42:46.449: INFO: namespace e2e-tests-kubectl-w2kfg deletion completed in 22.091995431s • [SLOW TEST:25.029 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:42:46.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Sep 9 18:42:50.631: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:43:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-9t5f6" for this suite. 
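What the namespace test above exercises is ordinary cascade deletion: every pod created inside a namespace is removed when the namespace itself goes away. A rough by-hand equivalent, with made-up pod and namespace names (the suite generates its own), looks like:

    kubectl create namespace nsdelete-demo
    kubectl run nsdelete-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never --namespace=nsdelete-demo
    kubectl wait --for=condition=Ready pod/nsdelete-pod --namespace=nsdelete-demo --timeout=120s
    kubectl delete namespace nsdelete-demo       # deletion cascades to every pod in the namespace
    kubectl get pods --namespace=nsdelete-demo   # nothing left once deletion finishes (or after the namespace is recreated empty)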
Sep 9 18:43:20.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:43:20.793: INFO: namespace: e2e-tests-namespaces-9t5f6, resource: bindings, ignored listing per whitelist Sep 9 18:43:20.836: INFO: namespace e2e-tests-namespaces-9t5f6 deletion completed in 6.086615986s STEP: Destroying namespace "e2e-tests-nsdeletetest-tb6w5" for this suite. Sep 9 18:43:20.838: INFO: Namespace e2e-tests-nsdeletetest-tb6w5 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-gdwfx" for this suite. Sep 9 18:43:26.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:43:26.920: INFO: namespace: e2e-tests-nsdeletetest-gdwfx, resource: bindings, ignored listing per whitelist Sep 9 18:43:26.926: INFO: namespace e2e-tests-nsdeletetest-gdwfx deletion completed in 6.088078645s • [SLOW TEST:40.477 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:43:26.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-59182068-f2cc-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 18:43:27.058: INFO: Waiting up to 5m0s for pod "pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-smtql" to be "success or failure" Sep 9 18:43:27.081: INFO: Pod "pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.801438ms Sep 9 18:43:29.159: INFO: Pod "pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101560501s Sep 9 18:43:31.164: INFO: Pod "pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.106116475s STEP: Saw pod success Sep 9 18:43:31.164: INFO: Pod "pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:43:31.167: INFO: Trying to get logs from node hunter-worker pod pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007 container secret-volume-test: STEP: delete the pod Sep 9 18:43:31.192: INFO: Waiting for pod pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:43:31.196: INFO: Pod pod-secrets-591a1906-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:43:31.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-smtql" for this suite. Sep 9 18:43:37.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:43:37.339: INFO: namespace: e2e-tests-secrets-smtql, resource: bindings, ignored listing per whitelist Sep 9 18:43:37.381: INFO: namespace e2e-tests-secrets-smtql deletion completed in 6.18061787s • [SLOW TEST:10.453 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:43:37.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 18:43:37.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Sep 9 18:43:37.659: INFO: stderr: "" Sep 9 18:43:37.659: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-07T10:49:09Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Sep 9 18:43:37.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qwvwf' Sep 9 18:43:37.915: INFO: stderr: "" Sep 9 18:43:37.915: INFO: stdout: "replicationcontroller/redis-master created\n" Sep 9 18:43:37.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qwvwf' Sep 9 18:43:38.190: INFO: stderr: "" Sep 9 18:43:38.190: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
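The describe test first creates a redis-master ReplicationController and Service by piping fixture manifests into kubectl create -f -. The exact fixtures live in the test data; the pair below is only an illustrative reconstruction from the describe output further on (same names, labels, image, and named port), assuming nothing beyond that:

    kubectl create -f - <<'EOF'    # the suite also passes --namespace=<its generated test namespace>
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      replicas: 1
      selector:
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - name: redis-server
              containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: redis-server
    EOF

Once the pod behind that selector is Running, kubectl describe is run against the pod, the rc, the service, a node, and the namespace, and the output is checked for the expected fields.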
Sep 9 18:43:39.243: INFO: Selector matched 1 pods for map[app:redis] Sep 9 18:43:39.243: INFO: Found 0 / 1 Sep 9 18:43:40.193: INFO: Selector matched 1 pods for map[app:redis] Sep 9 18:43:40.194: INFO: Found 0 / 1 Sep 9 18:43:41.195: INFO: Selector matched 1 pods for map[app:redis] Sep 9 18:43:41.195: INFO: Found 1 / 1 Sep 9 18:43:41.195: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 9 18:43:41.198: INFO: Selector matched 1 pods for map[app:redis] Sep 9 18:43:41.198: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 9 18:43:41.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gxx7w --namespace=e2e-tests-kubectl-qwvwf' Sep 9 18:43:41.323: INFO: stderr: "" Sep 9 18:43:41.323: INFO: stdout: "Name: redis-master-gxx7w\nNamespace: e2e-tests-kubectl-qwvwf\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.18.0.7\nStart Time: Wed, 09 Sep 2020 18:43:37 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.11\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://aa98c7fea59e4fef0adbb18d2ebf182bf6119fa495503a8338fd880fa00854cf\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 09 Sep 2020 18:43:40 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4xk9s (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4xk9s:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4xk9s\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-qwvwf/redis-master-gxx7w to hunter-worker2\n Normal Pulled 2s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" Sep 9 18:43:41.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-qwvwf' Sep 9 18:43:41.439: INFO: stderr: "" Sep 9 18:43:41.439: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-qwvwf\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-gxx7w\n" Sep 9 18:43:41.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-qwvwf' Sep 9 18:43:41.548: INFO: stderr: "" Sep 9 18:43:41.548: INFO: stdout: "Name: 
redis-master\nNamespace: e2e-tests-kubectl-qwvwf\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.94.250\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.11:6379\nSession Affinity: None\nEvents: \n" Sep 9 18:43:41.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Sep 9 18:43:41.675: INFO: stderr: "" Sep 9 18:43:41.675: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 05 Sep 2020 13:36:48 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 09 Sep 2020 18:43:34 +0000 Sat, 05 Sep 2020 13:36:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 09 Sep 2020 18:43:34 +0000 Sat, 05 Sep 2020 13:36:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 09 Sep 2020 18:43:34 +0000 Sat, 05 Sep 2020 13:36:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 09 Sep 2020 18:43:34 +0000 Sat, 05 Sep 2020 13:37:39 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.6\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 44138625b7954241b3c3f092d0954773\n System UUID: fca70277-c2bb-4584-a99b-46841510eb2f\n Boot ID: 16f80d7c-7741-4040-9735-0d166ad57c21\n Kernel Version: 4.15.0-115-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-gv2l2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d5h\n kube-system coredns-54ff9cd656-t76vb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d5h\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kindnet-78dfs 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d5h\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-proxy-qmxds 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n local-path-storage local-path-provisioner-674595c7-lmd9b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n 
ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Sep 9 18:43:41.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-qwvwf' Sep 9 18:43:41.786: INFO: stderr: "" Sep 9 18:43:41.786: INFO: stdout: "Name: e2e-tests-kubectl-qwvwf\nLabels: e2e-framework=kubectl\n e2e-run=c4a8a748-f2c5-11ea-88c2-0242ac110007\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:43:41.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qwvwf" for this suite. Sep 9 18:44:03.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:44:03.831: INFO: namespace: e2e-tests-kubectl-qwvwf, resource: bindings, ignored listing per whitelist Sep 9 18:44:03.876: INFO: namespace e2e-tests-kubectl-qwvwf deletion completed in 22.086028646s • [SLOW TEST:26.495 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:44:03.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:44:36.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-7wsr7" for this suite. Sep 9 18:44:42.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:44:42.792: INFO: namespace: e2e-tests-container-runtime-7wsr7, resource: bindings, ignored listing per whitelist Sep 9 18:44:42.892: INFO: namespace e2e-tests-container-runtime-7wsr7 deletion completed in 6.123319303s • [SLOW TEST:39.015 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:44:42.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:44:43.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-gd2b7" to be "success or failure" Sep 9 18:44:43.059: INFO: Pod "downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.416193ms Sep 9 18:44:45.063: INFO: Pod "downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021546498s Sep 9 18:44:47.066: INFO: Pod "downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024976256s STEP: Saw pod success Sep 9 18:44:47.066: INFO: Pod "downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:44:47.068: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:44:47.101: INFO: Waiting for pod downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:44:47.117: INFO: Pod downwardapi-volume-8665306f-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:44:47.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gd2b7" for this suite. Sep 9 18:44:53.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:44:53.159: INFO: namespace: e2e-tests-projected-gd2b7, resource: bindings, ignored listing per whitelist Sep 9 18:44:53.208: INFO: namespace e2e-tests-projected-gd2b7 deletion completed in 6.086567088s • [SLOW TEST:10.316 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:44:53.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 18:44:53.361: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 9 18:44:53.373: INFO: Number of nodes with available pods: 0 Sep 9 18:44:53.373: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
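The "complex daemon" is a DaemonSet whose pod template carries a nodeSelector, so its pods only land on nodes with a matching label; the suite labels a node blue, watches the daemon pod appear, relabels it green so the pod drains, then updates the selector and switches the update strategy to RollingUpdate. A minimal sketch with stand-in label key, image, and names (the real fixture differs):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          name: daemon-set
      updateStrategy:
        type: RollingUpdate        # the test switches to this mid-run; shown up front here for brevity
      template:
        metadata:
          labels:
            name: daemon-set
        spec:
          nodeSelector:
            color: blue            # stand-in for the label the test toggles on the node
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine
    EOF
    kubectl label node hunter-worker color=blue --overwrite    # daemon pod gets scheduled onto hunter-worker
    kubectl label node hunter-worker color=green --overwrite   # selector no longer matches; the pod is removed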
Sep 9 18:44:53.407: INFO: Number of nodes with available pods: 0 Sep 9 18:44:53.407: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:44:54.410: INFO: Number of nodes with available pods: 0 Sep 9 18:44:54.410: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:44:55.411: INFO: Number of nodes with available pods: 0 Sep 9 18:44:55.411: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:44:56.411: INFO: Number of nodes with available pods: 0 Sep 9 18:44:56.411: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:44:57.411: INFO: Number of nodes with available pods: 1 Sep 9 18:44:57.411: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 9 18:44:57.483: INFO: Number of nodes with available pods: 1 Sep 9 18:44:57.484: INFO: Number of running nodes: 0, number of available pods: 1 Sep 9 18:44:58.488: INFO: Number of nodes with available pods: 0 Sep 9 18:44:58.488: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 9 18:44:58.496: INFO: Number of nodes with available pods: 0 Sep 9 18:44:58.496: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:44:59.501: INFO: Number of nodes with available pods: 0 Sep 9 18:44:59.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:00.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:00.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:01.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:01.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:02.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:02.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:03.500: INFO: Number of nodes with available pods: 0 Sep 9 18:45:03.500: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:04.500: INFO: Number of nodes with available pods: 0 Sep 9 18:45:04.500: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:05.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:05.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:06.500: INFO: Number of nodes with available pods: 0 Sep 9 18:45:06.500: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:07.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:07.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:08.500: INFO: Number of nodes with available pods: 0 Sep 9 18:45:08.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:09.525: INFO: Number of nodes with available pods: 0 Sep 9 18:45:09.525: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:10.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:10.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:11.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:11.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:12.501: INFO: Number of nodes with available pods: 0 Sep 9 18:45:12.501: INFO: Node hunter-worker is running more than one daemon pod Sep 9 18:45:13.500: INFO: Number of nodes with available pods: 1 Sep 9 18:45:13.500: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] 
[sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jjp9d, will wait for the garbage collector to delete the pods Sep 9 18:45:13.565: INFO: Deleting DaemonSet.extensions daemon-set took: 6.433504ms Sep 9 18:45:13.665: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.246661ms Sep 9 18:45:19.489: INFO: Number of nodes with available pods: 0 Sep 9 18:45:19.489: INFO: Number of running nodes: 0, number of available pods: 0 Sep 9 18:45:19.494: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jjp9d/daemonsets","resourceVersion":"737010"},"items":null} Sep 9 18:45:19.496: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jjp9d/pods","resourceVersion":"737010"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:45:19.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-jjp9d" for this suite. Sep 9 18:45:25.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:45:25.577: INFO: namespace: e2e-tests-daemonsets-jjp9d, resource: bindings, ignored listing per whitelist Sep 9 18:45:25.630: INFO: namespace e2e-tests-daemonsets-jjp9d deletion completed in 6.093330393s • [SLOW TEST:32.422 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:45:25.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-9fd945c8-f2cc-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 18:45:25.766: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-87qpd" to be "success or failure" Sep 9 18:45:25.771: INFO: Pod "pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312344ms Sep 9 18:45:27.789: INFO: Pod "pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022888037s Sep 9 18:45:29.801: INFO: Pod "pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034807832s STEP: Saw pod success Sep 9 18:45:29.801: INFO: Pod "pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:45:29.803: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Sep 9 18:45:29.837: INFO: Waiting for pod pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:45:29.855: INFO: Pod pod-projected-secrets-9fddd76f-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:45:29.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-87qpd" for this suite. Sep 9 18:45:35.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:45:35.885: INFO: namespace: e2e-tests-projected-87qpd, resource: bindings, ignored listing per whitelist Sep 9 18:45:35.978: INFO: namespace e2e-tests-projected-87qpd deletion completed in 6.119987394s • [SLOW TEST:10.348 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:45:35.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-qbcmm.svc.cluster.local)" && 
echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qbcmm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qbcmm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-qbcmm.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-qbcmm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qbcmm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 9 18:45:42.250: INFO: DNS probes using e2e-tests-dns-qbcmm/dns-test-a60eb97d-f2cc-11ea-88c2-0242ac110007 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:45:42.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-qbcmm" for this suite. 
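Each entry in the probe loops above is the same one-liner: query the cluster DNS and record OK only if an answer came back. The doubled $$ in the log is command templating; in a real shell it is a single $. The core UDP check against the API service, runnable inside any pod that has dig (for example the test's dnsutils-style images), is:

    check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
      && test -n "$check" && echo OK    # +notcp forces UDP; the TCP variant swaps in +tcp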
Sep 9 18:45:48.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:45:48.508: INFO: namespace: e2e-tests-dns-qbcmm, resource: bindings, ignored listing per whitelist Sep 9 18:45:48.529: INFO: namespace e2e-tests-dns-qbcmm deletion completed in 6.169906243s • [SLOW TEST:12.550 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:45:48.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 9 18:45:48.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7lr6m' Sep 9 18:45:48.907: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 9 18:45:48.907: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Sep 9 18:45:50.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7lr6m' Sep 9 18:45:51.094: INFO: stderr: "" Sep 9 18:45:51.094: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:45:51.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7lr6m" for this suite. 
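The deprecation warning above spells out the replacement path: kubectl run with the deployment generator is gone from newer clients, and the same object is created with kubectl create deployment. Side by side, as an illustrative pairing rather than what the suite itself runs (the second form is the usual replacement on current kubectl):

    # what the test invokes on the v1.13 client (prints the DEPRECATED warning):
    kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # the replacement on newer clients:
    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    kubectl delete deployment e2e-test-nginx-deployment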
Sep 9 18:45:57.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:45:57.253: INFO: namespace: e2e-tests-kubectl-7lr6m, resource: bindings, ignored listing per whitelist Sep 9 18:45:57.286: INFO: namespace e2e-tests-kubectl-7lr6m deletion completed in 6.164723383s • [SLOW TEST:8.757 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:45:57.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Sep 9 18:45:57.371: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737205,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Sep 9 18:45:57.371: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737205,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Sep 9 18:46:07.379: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737225,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Sep 9 18:46:07.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737225,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Sep 9 18:46:17.387: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737245,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Sep 9 18:46:17.387: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737245,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Sep 9 18:46:27.394: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737265,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Sep 9 18:46:27.394: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-a,UID:b2b34431-f2cc-11ea-b060-0242ac120006,ResourceVersion:737265,Generation:0,CreationTimestamp:2020-09-09 18:45:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Sep 9 18:46:37.401: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-b,UID:ca90dd79-f2cc-11ea-b060-0242ac120006,ResourceVersion:737285,Generation:0,CreationTimestamp:2020-09-09 18:46:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Sep 9 18:46:37.401: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-b,UID:ca90dd79-f2cc-11ea-b060-0242ac120006,ResourceVersion:737285,Generation:0,CreationTimestamp:2020-09-09 18:46:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Sep 9 18:46:47.408: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-b,UID:ca90dd79-f2cc-11ea-b060-0242ac120006,ResourceVersion:737305,Generation:0,CreationTimestamp:2020-09-09 18:46:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Sep 9 18:46:47.409: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-22z5p,SelfLink:/api/v1/namespaces/e2e-tests-watch-22z5p/configmaps/e2e-watch-test-configmap-b,UID:ca90dd79-f2cc-11ea-b060-0242ac120006,ResourceVersion:737305,Generation:0,CreationTimestamp:2020-09-09 18:46:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:46:57.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-22z5p" for this suite. Sep 9 18:47:03.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:47:03.475: INFO: namespace: e2e-tests-watch-22z5p, resource: bindings, ignored listing per whitelist Sep 9 18:47:03.497: INFO: namespace e2e-tests-watch-22z5p deletion completed in 6.08250258s • [SLOW TEST:66.211 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:47:03.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-da3357c6-f2cc-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 18:47:03.643: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-bhbvl" to be "success or failure" Sep 9 18:47:03.647: INFO: Pod "pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553952ms Sep 9 18:47:05.651: INFO: Pod "pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008757981s Sep 9 18:47:07.656: INFO: Pod "pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013326126s STEP: Saw pod success Sep 9 18:47:07.656: INFO: Pod "pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:47:07.659: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007 container projected-configmap-volume-test: STEP: delete the pod Sep 9 18:47:07.726: INFO: Waiting for pod pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:47:07.737: INFO: Pod pod-projected-configmaps-da34a99f-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:47:07.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bhbvl" for this suite. Sep 9 18:47:13.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:47:13.766: INFO: namespace: e2e-tests-projected-bhbvl, resource: bindings, ignored listing per whitelist Sep 9 18:47:13.853: INFO: namespace e2e-tests-projected-bhbvl deletion completed in 6.112751172s • [SLOW TEST:10.356 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:47:13.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:47:14.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-scfkt" for this suite. 
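The Kubelet test above ("a busybox command that always fails in a pod should be possible to delete") creates a pod whose container exits non-zero forever and then verifies it can still be deleted. A minimal sketch of that scenario with made-up names (crashing-pod, kubelet-demo) and busybox as an illustrative image, not the e2e framework's own code:

```sh
# Pod whose only container always exits non-zero; it will sit in CrashLoopBackOff.
kubectl create namespace kubelet-demo
cat <<'EOF' | kubectl apply -n kubelet-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: crashing-pod
spec:
  restartPolicy: Always
  containers:
  - name: always-fails
    image: busybox
    command: ["/bin/false"]
EOF
# The point of the conformance check: even a crash-looping pod must be deletable.
kubectl delete pod crashing-pod -n kubelet-demo --wait=true
```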
Sep 9 18:47:20.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:47:20.099: INFO: namespace: e2e-tests-kubelet-test-scfkt, resource: bindings, ignored listing per whitelist Sep 9 18:47:20.159: INFO: namespace e2e-tests-kubelet-test-scfkt deletion completed in 6.109617642s • [SLOW TEST:6.305 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:47:20.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:47:20.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-tvlr2" to be "success or failure" Sep 9 18:47:20.276: INFO: Pod "downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196515ms Sep 9 18:47:22.294: INFO: Pod "downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023759539s Sep 9 18:47:24.297: INFO: Pod "downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027351477s STEP: Saw pod success Sep 9 18:47:24.297: INFO: Pod "downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:47:24.300: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:47:24.375: INFO: Waiting for pod downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007 to disappear Sep 9 18:47:24.395: INFO: Pod downwardapi-volume-e41bc6d1-f2cc-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:47:24.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tvlr2" for this suite. 
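The "should provide container's cpu request" probe above mounts a downward API projected volume and reads the container's own CPU request back out of a file. A minimal sketch of that setup, with illustrative names (downwardapi-demo, /etc/podinfo) and busybox standing in for the test image:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF
# With the default divisor of 1, fractional CPU quantities are rounded up,
# so a 250m request is reported in the file as "1".
kubectl logs downwardapi-demo
```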
Sep 9 18:47:30.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:47:30.460: INFO: namespace: e2e-tests-projected-tvlr2, resource: bindings, ignored listing per whitelist Sep 9 18:47:30.499: INFO: namespace e2e-tests-projected-tvlr2 deletion completed in 6.099945031s • [SLOW TEST:10.341 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:47:30.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Sep 9 18:47:30.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:30.926: INFO: stderr: "" Sep 9 18:47:30.926: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 9 18:47:30.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:31.098: INFO: stderr: "" Sep 9 18:47:31.098: INFO: stdout: "update-demo-nautilus-h8klv update-demo-nautilus-rnprc " Sep 9 18:47:31.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8klv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:31.215: INFO: stderr: "" Sep 9 18:47:31.215: INFO: stdout: "" Sep 9 18:47:31.215: INFO: update-demo-nautilus-h8klv is created but not running Sep 9 18:47:36.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:36.323: INFO: stderr: "" Sep 9 18:47:36.323: INFO: stdout: "update-demo-nautilus-h8klv update-demo-nautilus-rnprc " Sep 9 18:47:36.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8klv -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:36.438: INFO: stderr: "" Sep 9 18:47:36.438: INFO: stdout: "true" Sep 9 18:47:36.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h8klv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:36.545: INFO: stderr: "" Sep 9 18:47:36.545: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 9 18:47:36.545: INFO: validating pod update-demo-nautilus-h8klv Sep 9 18:47:36.549: INFO: got data: { "image": "nautilus.jpg" } Sep 9 18:47:36.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 9 18:47:36.549: INFO: update-demo-nautilus-h8klv is verified up and running Sep 9 18:47:36.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rnprc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:36.654: INFO: stderr: "" Sep 9 18:47:36.654: INFO: stdout: "true" Sep 9 18:47:36.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rnprc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:36.747: INFO: stderr: "" Sep 9 18:47:36.747: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 9 18:47:36.747: INFO: validating pod update-demo-nautilus-rnprc Sep 9 18:47:36.750: INFO: got data: { "image": "nautilus.jpg" } Sep 9 18:47:36.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 9 18:47:36.750: INFO: update-demo-nautilus-rnprc is verified up and running STEP: rolling-update to new replication controller Sep 9 18:47:36.752: INFO: scanned /root for discovery docs: Sep 9 18:47:36.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:59.396: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Sep 9 18:47:59.396: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 9 18:47:59.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:59.507: INFO: stderr: "" Sep 9 18:47:59.507: INFO: stdout: "update-demo-kitten-7vbrn update-demo-kitten-hc8kq " Sep 9 18:47:59.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7vbrn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:59.630: INFO: stderr: "" Sep 9 18:47:59.630: INFO: stdout: "true" Sep 9 18:47:59.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7vbrn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:59.731: INFO: stderr: "" Sep 9 18:47:59.731: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Sep 9 18:47:59.731: INFO: validating pod update-demo-kitten-7vbrn Sep 9 18:47:59.757: INFO: got data: { "image": "kitten.jpg" } Sep 9 18:47:59.757: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Sep 9 18:47:59.757: INFO: update-demo-kitten-7vbrn is verified up and running Sep 9 18:47:59.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hc8kq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:59.854: INFO: stderr: "" Sep 9 18:47:59.854: INFO: stdout: "true" Sep 9 18:47:59.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hc8kq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfnqr' Sep 9 18:47:59.957: INFO: stderr: "" Sep 9 18:47:59.957: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Sep 9 18:47:59.957: INFO: validating pod update-demo-kitten-hc8kq Sep 9 18:47:59.990: INFO: got data: { "image": "kitten.jpg" } Sep 9 18:47:59.990: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Sep 9 18:47:59.990: INFO: update-demo-kitten-hc8kq is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:47:59.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qfnqr" for this suite. 
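The stderr above notes that `kubectl rolling-update` (which operates on replication controllers) is deprecated in favour of `rollout`. A sketch of the equivalent Deployment-based flow with a reasonably recent kubectl, using made-up names; this is the suggested replacement, not what the conformance test itself runs:

```sh
# Create a two-replica Deployment on the old image, then roll it to the new one.
kubectl create deployment update-demo --image=gcr.io/kubernetes-e2e-test-images/nautilus:1.0
kubectl scale deployment update-demo --replicas=2
# '*' updates every container in the pod template, so the generated container name doesn't matter.
kubectl set image deployment/update-demo '*'=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
# Spot-check which images the replacement pods are actually running
# (kubectl create deployment labels the pods app=update-demo).
kubectl get pods -l app=update-demo \
  -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}'
```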
Sep 9 18:48:22.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:48:22.020: INFO: namespace: e2e-tests-kubectl-qfnqr, resource: bindings, ignored listing per whitelist Sep 9 18:48:22.139: INFO: namespace e2e-tests-kubectl-qfnqr deletion completed in 22.14518958s • [SLOW TEST:51.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:48:22.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 18:48:22.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-x7pj7" to be "success or failure" Sep 9 18:48:22.260: INFO: Pod "downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.408939ms Sep 9 18:48:24.295: INFO: Pod "downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040229088s Sep 9 18:48:26.298: INFO: Pod "downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044178363s STEP: Saw pod success Sep 9 18:48:26.299: INFO: Pod "downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:48:26.302: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:48:26.465: INFO: Waiting for pod downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007 to disappear Sep 9 18:48:26.528: INFO: Pod downwardapi-volume-090c4116-f2cd-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:48:26.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x7pj7" for this suite. 
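The cpu-limit variant just above works the same way as the cpu-request sketch earlier; the only differences are the resource being exposed and, optionally, a `divisor` that picks the unit. A small illustrative manifest (names made up):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report in millicores instead of whole CPUs
EOF
kubectl logs downwardapi-limit-demo   # with divisor 1m a 500m limit prints as "500"
```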
Sep 9 18:48:32.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:48:32.674: INFO: namespace: e2e-tests-projected-x7pj7, resource: bindings, ignored listing per whitelist Sep 9 18:48:32.703: INFO: namespace e2e-tests-projected-x7pj7 deletion completed in 6.169977711s • [SLOW TEST:10.564 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:48:32.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rw4hp in namespace e2e-tests-proxy-nstvs I0909 18:48:32.896206 6 runners.go:184] Created replication controller with name: proxy-service-rw4hp, namespace: e2e-tests-proxy-nstvs, replica count: 1 I0909 18:48:33.946658 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0909 18:48:34.946883 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0909 18:48:35.947175 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0909 18:48:36.947418 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0909 18:48:37.947621 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0909 18:48:38.947904 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0909 18:48:39.948269 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0909 18:48:40.948504 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0909 18:48:41.948750 6 runners.go:184] proxy-service-rw4hp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 9 18:48:41.952: INFO: setup took 9.108474275s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 9 18:48:41.960: INFO: (0) 
/api/v1/namespaces/e2e-tests-proxy-nstvs/pods/http:proxy-service-rw4hp-dm88t:160/proxy/: foo (200; 7.777233ms) Sep 9 18:48:41.960: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-nstvs/services/proxy-service-rw4hp:portname2/proxy/: bar (200; 7.978223ms) Sep 9 18:48:41.960: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-nstvs/pods/proxy-service-rw4hp-dm88t:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-1a2ae6ef-f2cd-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 18:48:50.969: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-b6hz7" to be "success or failure" Sep 9 18:48:50.979: INFO: Pod "pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.837537ms Sep 9 18:48:52.983: INFO: Pod "pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013924886s Sep 9 18:48:54.997: INFO: Pod "pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027985986s STEP: Saw pod success Sep 9 18:48:54.997: INFO: Pod "pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:48:55.000: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007 container secret-volume-test: STEP: delete the pod Sep 9 18:48:55.022: INFO: Waiting for pod pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007 to disappear Sep 9 18:48:55.045: INFO: Pod pod-projected-secrets-1a2b77db-f2cd-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:48:55.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b6hz7" for this suite. 
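The projected-secret test above mounts the same secret into two different volumes of one pod and checks the data is readable at both mount points. A minimal sketch with illustrative names:

```sh
kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
kubectl logs pod-projected-secrets-demo   # expect "value-1" printed once per mount
```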
Sep 9 18:49:01.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:49:01.111: INFO: namespace: e2e-tests-projected-b6hz7, resource: bindings, ignored listing per whitelist Sep 9 18:49:01.149: INFO: namespace e2e-tests-projected-b6hz7 deletion completed in 6.099443606s • [SLOW TEST:10.367 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:49:01.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-2055eba9-f2cd-11ea-88c2-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-2055ec73-f2cd-11ea-88c2-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2055eba9-f2cd-11ea-88c2-0242ac110007 STEP: Updating configmap cm-test-opt-upd-2055ec73-f2cd-11ea-88c2-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-2055ed95-f2cd-11ea-88c2-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:49:09.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vprw9" for this suite. 
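The "optional updates should be reflected in volume" test above mounts optional configMaps, then deletes, updates and creates them while the pod is running and waits for the mounted files to change. A sketch of the underlying behaviour with made-up names; the kubelet refreshes projected configMap volumes on its sync period:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-demo
          optional: true   # pod starts even though the configMap does not exist yet
EOF
# Creating (and later updating) the configMap is eventually reflected in /etc/cm/data.
kubectl create configmap cm-test-opt-demo --from-literal=data=value-1
kubectl logs -f cm-volume-demo
```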
Sep 9 18:49:33.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:49:33.534: INFO: namespace: e2e-tests-projected-vprw9, resource: bindings, ignored listing per whitelist Sep 9 18:49:33.562: INFO: namespace e2e-tests-projected-vprw9 deletion completed in 24.094655878s • [SLOW TEST:32.414 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:49:33.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-sgz96;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-sgz96.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-sgz96.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-sgz96.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.54.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.54.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.54.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.54.246_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-sgz96;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-sgz96;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-sgz96.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-sgz96.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-sgz96.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-sgz96.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-sgz96.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.54.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.54.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.54.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.54.246_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 9 18:49:39.801: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.809: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.838: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.841: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.844: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.848: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.851: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.855: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.859: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.862: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:39.881: INFO: Lookups using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-sgz96 jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc] Sep 9 18:49:44.886: INFO: Unable to 
read wheezy_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.897: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.930: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.933: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.936: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.939: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.942: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.944: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.947: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:44.974: INFO: Lookups using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-sgz96 jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc] Sep 9 18:49:49.886: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.897: INFO: Unable to read 
wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.932: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.935: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.938: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.942: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.944: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.947: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.953: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:49.973: INFO: Lookups using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-sgz96 jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc] Sep 9 18:49:54.886: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.897: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.933: INFO: Unable to read 
jessie_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.936: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.939: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.941: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.945: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.947: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.953: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:54.973: INFO: Lookups using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-sgz96 jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc] Sep 9 18:49:59.886: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.896: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.929: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.932: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.935: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.937: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.940: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.943: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.946: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.949: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:49:59.968: INFO: Lookups using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-sgz96 jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc] Sep 9 18:50:04.885: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.895: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.929: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.932: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.935: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96 from pod 
e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.938: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.940: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.943: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.945: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.948: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc from pod e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007: the server could not find the requested resource (get pods dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007) Sep 9 18:50:04.964: INFO: Lookups using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-sgz96 jessie_tcp@dns-test-service.e2e-tests-dns-sgz96 jessie_udp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@dns-test-service.e2e-tests-dns-sgz96.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-sgz96.svc] Sep 9 18:50:09.988: INFO: DNS probes using e2e-tests-dns-sgz96/dns-test-33a77c44-f2cd-11ea-88c2-0242ac110007 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:50:10.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-sgz96" for this suite. 
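The dig loops above probe A, SRV and PTR records for a test service from both a wheezy and a jessie prober pod. A hand-run version of the same idea, assuming a cluster DNS service and the default namespace, with illustrative names; busybox 1.28 is used only because its nslookup is known to behave:

```sh
kubectl create deployment dns-demo --image=nginx
kubectl expose deployment dns-demo --name=dns-test-service --port=80   # ClusterIP service gets an A record
# Resolve the short name (via the pod's search path) and the fully qualified name.
kubectl run dns-client --rm -it --restart=Never --image=busybox:1.28 -- \
  sh -c 'nslookup dns-test-service; nslookup dns-test-service.default.svc.cluster.local'
# Headless services (clusterIP: None) resolve to the individual pod IPs instead, and
# SRV records such as _http._tcp.<service> exist when the service port is named "http".
```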
Sep 9 18:50:16.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:50:16.654: INFO: namespace: e2e-tests-dns-sgz96, resource: bindings, ignored listing per whitelist Sep 9 18:50:16.676: INFO: namespace e2e-tests-dns-sgz96 deletion completed in 6.106453376s • [SLOW TEST:43.113 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:50:16.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 9 18:50:16.792: INFO: Waiting up to 5m0s for pod "pod-4d532c3e-f2cd-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-jrt55" to be "success or failure" Sep 9 18:50:16.882: INFO: Pod "pod-4d532c3e-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 90.177014ms Sep 9 18:50:18.886: INFO: Pod "pod-4d532c3e-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094010639s Sep 9 18:50:20.890: INFO: Pod "pod-4d532c3e-f2cd-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098101025s STEP: Saw pod success Sep 9 18:50:20.890: INFO: Pod "pod-4d532c3e-f2cd-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:50:20.893: INFO: Trying to get logs from node hunter-worker2 pod pod-4d532c3e-f2cd-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 18:50:20.911: INFO: Waiting for pod pod-4d532c3e-f2cd-11ea-88c2-0242ac110007 to disappear Sep 9 18:50:20.916: INFO: Pod pod-4d532c3e-f2cd-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:50:20.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jrt55" for this suite. 
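The (root,0777,default) case above boils down to an emptyDir on the node's default medium whose mount point must be world-writable. A minimal sketch with illustrative names:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-demo   # an emptyDir defaults to mode 0777 (drwxrwxrwx)
```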
Sep 9 18:50:26.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:50:26.948: INFO: namespace: e2e-tests-emptydir-jrt55, resource: bindings, ignored listing per whitelist Sep 9 18:50:27.011: INFO: namespace e2e-tests-emptydir-jrt55 deletion completed in 6.091426329s • [SLOW TEST:10.335 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:50:27.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-538177a2-f2cd-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 18:50:27.201: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-tbg2b" to be "success or failure" Sep 9 18:50:27.235: INFO: Pod "pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.818728ms Sep 9 18:50:29.239: INFO: Pod "pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037915249s Sep 9 18:50:31.243: INFO: Pod "pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042231384s STEP: Saw pod success Sep 9 18:50:31.243: INFO: Pod "pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:50:31.249: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007 container projected-secret-volume-test: STEP: delete the pod Sep 9 18:50:31.278: INFO: Waiting for pod pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007 to disappear Sep 9 18:50:31.300: INFO: Pod pod-projected-secrets-53858741-f2cd-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:50:31.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tbg2b" for this suite. 
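The defaultMode test above checks that files projected from a secret pick up the mode set on the volume. A sketch with made-up names; `defaultMode` sits on the projected volume and applies to every projected file unless an item overrides it:

```sh
kubectl create secret generic projected-secret-mode-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # projected files appear as -r--------
      sources:
      - secret:
          name: projected-secret-mode-demo
EOF
kubectl logs pod-projected-secret-mode-demo   # expect: 400
```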
Sep 9 18:50:37.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:50:37.367: INFO: namespace: e2e-tests-projected-tbg2b, resource: bindings, ignored listing per whitelist Sep 9 18:50:37.391: INFO: namespace e2e-tests-projected-tbg2b deletion completed in 6.087228334s • [SLOW TEST:10.380 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:50:37.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 9 18:50:37.547: INFO: Waiting up to 5m0s for pod "pod-59b2dc03-f2cd-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-pr5zp" to be "success or failure" Sep 9 18:50:37.551: INFO: Pod "pod-59b2dc03-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.843248ms Sep 9 18:50:39.572: INFO: Pod "pod-59b2dc03-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025502422s Sep 9 18:50:41.576: INFO: Pod "pod-59b2dc03-f2cd-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029605579s STEP: Saw pod success Sep 9 18:50:41.576: INFO: Pod "pod-59b2dc03-f2cd-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:50:41.579: INFO: Trying to get logs from node hunter-worker2 pod pod-59b2dc03-f2cd-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 18:50:41.617: INFO: Waiting for pod pod-59b2dc03-f2cd-11ea-88c2-0242ac110007 to disappear Sep 9 18:50:41.660: INFO: Pod pod-59b2dc03-f2cd-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:50:41.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-pr5zp" for this suite. 
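The (root,0777,tmpfs) variant above differs from the default-medium sketch earlier only in the emptyDir's Medium field; a minimal sketch of that one change:

// Sketch of the tmpfs flavour: the emptyDir is backed by memory ("Medium: Memory").
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Printf("%+v\n", vol)
}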
Sep 9 18:50:47.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:50:47.719: INFO: namespace: e2e-tests-emptydir-pr5zp, resource: bindings, ignored listing per whitelist Sep 9 18:50:47.769: INFO: namespace e2e-tests-emptydir-pr5zp deletion completed in 6.104520958s • [SLOW TEST:10.378 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:50:47.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-n8hg6 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Sep 9 18:50:47.934: INFO: Found 0 stateful pods, waiting for 3 Sep 9 18:50:57.939: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:50:57.939: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:50:57.939: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 9 18:51:07.940: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:51:07.940: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:51:07.940: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Sep 9 18:51:07.966: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 9 18:51:18.002: INFO: Updating stateful set ss2 Sep 9 18:51:18.010: INFO: Waiting for Pod e2e-tests-statefulset-n8hg6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 9 18:51:28.017: INFO: Waiting for Pod e2e-tests-statefulset-n8hg6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Sep 9 18:51:38.575: INFO: Found 2 stateful pods, waiting for 3 Sep 9 18:51:48.580: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true Sep 9 18:51:48.580: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:51:48.580: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Sep 9 18:51:48.605: INFO: Updating stateful set ss2 Sep 9 18:51:48.634: INFO: Waiting for Pod e2e-tests-statefulset-n8hg6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 9 18:51:58.643: INFO: Waiting for Pod e2e-tests-statefulset-n8hg6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Sep 9 18:52:08.660: INFO: Updating stateful set ss2 Sep 9 18:52:08.718: INFO: Waiting for StatefulSet e2e-tests-statefulset-n8hg6/ss2 to complete update Sep 9 18:52:08.718: INFO: Waiting for Pod e2e-tests-statefulset-n8hg6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 9 18:52:18.727: INFO: Deleting all statefulset in ns e2e-tests-statefulset-n8hg6 Sep 9 18:52:18.731: INFO: Scaling statefulset ss2 to 0 Sep 9 18:52:58.751: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:52:58.754: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:52:58.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-n8hg6" for this suite. Sep 9 18:53:04.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:53:04.865: INFO: namespace: e2e-tests-statefulset-n8hg6, resource: bindings, ignored listing per whitelist Sep 9 18:53:04.895: INFO: namespace e2e-tests-statefulset-n8hg6 deletion completed in 6.119351491s • [SLOW TEST:137.126 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:53:04.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test 
downward API volume plugin Sep 9 18:53:05.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-zjzdp" to be "success or failure" Sep 9 18:53:05.030: INFO: Pod "downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433936ms Sep 9 18:53:07.033: INFO: Pod "downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005453959s Sep 9 18:53:09.036: INFO: Pod "downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008698162s STEP: Saw pod success Sep 9 18:53:09.036: INFO: Pod "downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 18:53:09.039: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 18:53:09.103: INFO: Waiting for pod downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007 to disappear Sep 9 18:53:09.112: INFO: Pod downwardapi-volume-b196896e-f2cd-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:53:09.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zjzdp" for this suite. Sep 9 18:53:15.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:53:15.196: INFO: namespace: e2e-tests-projected-zjzdp, resource: bindings, ignored listing per whitelist Sep 9 18:53:15.224: INFO: namespace e2e-tests-projected-zjzdp deletion completed in 6.108538744s • [SLOW TEST:10.328 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:53:15.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xn7gp [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-xn7gp STEP: Waiting until 
all stateful set ss replicas will be running in namespace e2e-tests-statefulset-xn7gp Sep 9 18:53:15.358: INFO: Found 0 stateful pods, waiting for 1 Sep 9 18:53:25.362: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 9 18:53:25.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:53:25.658: INFO: stderr: "I0909 18:53:25.477160 1402 log.go:172] (0xc000748420) (0xc000698640) Create stream\nI0909 18:53:25.477217 1402 log.go:172] (0xc000748420) (0xc000698640) Stream added, broadcasting: 1\nI0909 18:53:25.479934 1402 log.go:172] (0xc000748420) Reply frame received for 1\nI0909 18:53:25.480060 1402 log.go:172] (0xc000748420) (0xc0005aee60) Create stream\nI0909 18:53:25.480099 1402 log.go:172] (0xc000748420) (0xc0005aee60) Stream added, broadcasting: 3\nI0909 18:53:25.481142 1402 log.go:172] (0xc000748420) Reply frame received for 3\nI0909 18:53:25.481202 1402 log.go:172] (0xc000748420) (0xc000408000) Create stream\nI0909 18:53:25.481228 1402 log.go:172] (0xc000748420) (0xc000408000) Stream added, broadcasting: 5\nI0909 18:53:25.482273 1402 log.go:172] (0xc000748420) Reply frame received for 5\nI0909 18:53:25.651867 1402 log.go:172] (0xc000748420) Data frame received for 3\nI0909 18:53:25.651937 1402 log.go:172] (0xc0005aee60) (3) Data frame handling\nI0909 18:53:25.651958 1402 log.go:172] (0xc0005aee60) (3) Data frame sent\nI0909 18:53:25.651996 1402 log.go:172] (0xc000748420) Data frame received for 5\nI0909 18:53:25.652123 1402 log.go:172] (0xc000408000) (5) Data frame handling\nI0909 18:53:25.652186 1402 log.go:172] (0xc000748420) Data frame received for 3\nI0909 18:53:25.652207 1402 log.go:172] (0xc0005aee60) (3) Data frame handling\nI0909 18:53:25.654205 1402 log.go:172] (0xc000748420) Data frame received for 1\nI0909 18:53:25.654242 1402 log.go:172] (0xc000698640) (1) Data frame handling\nI0909 18:53:25.654270 1402 log.go:172] (0xc000698640) (1) Data frame sent\nI0909 18:53:25.654290 1402 log.go:172] (0xc000748420) (0xc000698640) Stream removed, broadcasting: 1\nI0909 18:53:25.654370 1402 log.go:172] (0xc000748420) Go away received\nI0909 18:53:25.654563 1402 log.go:172] (0xc000748420) (0xc000698640) Stream removed, broadcasting: 1\nI0909 18:53:25.654610 1402 log.go:172] (0xc000748420) (0xc0005aee60) Stream removed, broadcasting: 3\nI0909 18:53:25.654629 1402 log.go:172] (0xc000748420) (0xc000408000) Stream removed, broadcasting: 5\n" Sep 9 18:53:25.659: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:53:25.659: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:53:25.662: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 9 18:53:35.667: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:53:35.667: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:53:35.700: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:53:35.700: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:53:35.700: INFO: Sep 9 18:53:35.700: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 9 18:53:36.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977853968s Sep 9 18:53:37.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972980281s Sep 9 18:53:38.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969575444s Sep 9 18:53:39.717: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964572126s Sep 9 18:53:40.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960684174s Sep 9 18:53:41.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.955585387s Sep 9 18:53:42.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950440243s Sep 9 18:53:43.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945155134s Sep 9 18:53:44.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 940.008168ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-xn7gp Sep 9 18:53:45.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:53:45.977: INFO: stderr: "I0909 18:53:45.888346 1424 log.go:172] (0xc000162790) (0xc000734640) Create stream\nI0909 18:53:45.888398 1424 log.go:172] (0xc000162790) (0xc000734640) Stream added, broadcasting: 1\nI0909 18:53:45.890667 1424 log.go:172] (0xc000162790) Reply frame received for 1\nI0909 18:53:45.890709 1424 log.go:172] (0xc000162790) (0xc0007346e0) Create stream\nI0909 18:53:45.890721 1424 log.go:172] (0xc000162790) (0xc0007346e0) Stream added, broadcasting: 3\nI0909 18:53:45.891809 1424 log.go:172] (0xc000162790) Reply frame received for 3\nI0909 18:53:45.891837 1424 log.go:172] (0xc000162790) (0xc0005e6c80) Create stream\nI0909 18:53:45.891846 1424 log.go:172] (0xc000162790) (0xc0005e6c80) Stream added, broadcasting: 5\nI0909 18:53:45.893084 1424 log.go:172] (0xc000162790) Reply frame received for 5\nI0909 18:53:45.971561 1424 log.go:172] (0xc000162790) Data frame received for 5\nI0909 18:53:45.971606 1424 log.go:172] (0xc0005e6c80) (5) Data frame handling\nI0909 18:53:45.971635 1424 log.go:172] (0xc000162790) Data frame received for 3\nI0909 18:53:45.971646 1424 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0909 18:53:45.971660 1424 log.go:172] (0xc0007346e0) (3) Data frame sent\nI0909 18:53:45.971671 1424 log.go:172] (0xc000162790) Data frame received for 3\nI0909 18:53:45.971681 1424 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0909 18:53:45.973490 1424 log.go:172] (0xc000162790) Data frame received for 1\nI0909 18:53:45.973605 1424 log.go:172] (0xc000734640) (1) Data frame handling\nI0909 18:53:45.973641 1424 log.go:172] (0xc000734640) (1) Data frame sent\nI0909 18:53:45.973666 1424 log.go:172] (0xc000162790) (0xc000734640) Stream removed, broadcasting: 1\nI0909 18:53:45.973790 1424 log.go:172] (0xc000162790) Go away received\nI0909 18:53:45.973935 1424 log.go:172] (0xc000162790) (0xc000734640) Stream removed, broadcasting: 1\nI0909 18:53:45.973956 1424 log.go:172] (0xc000162790) (0xc0007346e0) Stream removed, broadcasting: 3\nI0909 18:53:45.973973 1424 log.go:172] 
(0xc000162790) (0xc0005e6c80) Stream removed, broadcasting: 5\n" Sep 9 18:53:45.977: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:53:45.977: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:53:45.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:53:46.173: INFO: stderr: "I0909 18:53:46.099213 1446 log.go:172] (0xc0007c62c0) (0xc000734780) Create stream\nI0909 18:53:46.099268 1446 log.go:172] (0xc0007c62c0) (0xc000734780) Stream added, broadcasting: 1\nI0909 18:53:46.101856 1446 log.go:172] (0xc0007c62c0) Reply frame received for 1\nI0909 18:53:46.101898 1446 log.go:172] (0xc0007c62c0) (0xc0004fc8c0) Create stream\nI0909 18:53:46.101916 1446 log.go:172] (0xc0007c62c0) (0xc0004fc8c0) Stream added, broadcasting: 3\nI0909 18:53:46.102778 1446 log.go:172] (0xc0007c62c0) Reply frame received for 3\nI0909 18:53:46.102807 1446 log.go:172] (0xc0007c62c0) (0xc000734820) Create stream\nI0909 18:53:46.102813 1446 log.go:172] (0xc0007c62c0) (0xc000734820) Stream added, broadcasting: 5\nI0909 18:53:46.103759 1446 log.go:172] (0xc0007c62c0) Reply frame received for 5\nI0909 18:53:46.168216 1446 log.go:172] (0xc0007c62c0) Data frame received for 5\nI0909 18:53:46.168255 1446 log.go:172] (0xc000734820) (5) Data frame handling\nI0909 18:53:46.168270 1446 log.go:172] (0xc000734820) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0909 18:53:46.168290 1446 log.go:172] (0xc0007c62c0) Data frame received for 3\nI0909 18:53:46.168300 1446 log.go:172] (0xc0004fc8c0) (3) Data frame handling\nI0909 18:53:46.168310 1446 log.go:172] (0xc0004fc8c0) (3) Data frame sent\nI0909 18:53:46.168318 1446 log.go:172] (0xc0007c62c0) Data frame received for 3\nI0909 18:53:46.168328 1446 log.go:172] (0xc0004fc8c0) (3) Data frame handling\nI0909 18:53:46.168337 1446 log.go:172] (0xc0007c62c0) Data frame received for 5\nI0909 18:53:46.168349 1446 log.go:172] (0xc000734820) (5) Data frame handling\nI0909 18:53:46.169774 1446 log.go:172] (0xc0007c62c0) Data frame received for 1\nI0909 18:53:46.169792 1446 log.go:172] (0xc000734780) (1) Data frame handling\nI0909 18:53:46.169803 1446 log.go:172] (0xc000734780) (1) Data frame sent\nI0909 18:53:46.169822 1446 log.go:172] (0xc0007c62c0) (0xc000734780) Stream removed, broadcasting: 1\nI0909 18:53:46.169963 1446 log.go:172] (0xc0007c62c0) (0xc000734780) Stream removed, broadcasting: 1\nI0909 18:53:46.169978 1446 log.go:172] (0xc0007c62c0) (0xc0004fc8c0) Stream removed, broadcasting: 3\nI0909 18:53:46.170069 1446 log.go:172] (0xc0007c62c0) Go away received\nI0909 18:53:46.170119 1446 log.go:172] (0xc0007c62c0) (0xc000734820) Stream removed, broadcasting: 5\n" Sep 9 18:53:46.173: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:53:46.173: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:53:46.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:53:46.360: INFO: stderr: "I0909 18:53:46.292424 1468 log.go:172] (0xc000138790) (0xc000706640) Create stream\nI0909 18:53:46.292476 1468 log.go:172] (0xc000138790) 
(0xc000706640) Stream added, broadcasting: 1\nI0909 18:53:46.294795 1468 log.go:172] (0xc000138790) Reply frame received for 1\nI0909 18:53:46.294842 1468 log.go:172] (0xc000138790) (0xc0007a6f00) Create stream\nI0909 18:53:46.294857 1468 log.go:172] (0xc000138790) (0xc0007a6f00) Stream added, broadcasting: 3\nI0909 18:53:46.295570 1468 log.go:172] (0xc000138790) Reply frame received for 3\nI0909 18:53:46.295603 1468 log.go:172] (0xc000138790) (0xc0007066e0) Create stream\nI0909 18:53:46.295613 1468 log.go:172] (0xc000138790) (0xc0007066e0) Stream added, broadcasting: 5\nI0909 18:53:46.296686 1468 log.go:172] (0xc000138790) Reply frame received for 5\nI0909 18:53:46.355753 1468 log.go:172] (0xc000138790) Data frame received for 5\nI0909 18:53:46.355805 1468 log.go:172] (0xc0007066e0) (5) Data frame handling\nI0909 18:53:46.355825 1468 log.go:172] (0xc0007066e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0909 18:53:46.355850 1468 log.go:172] (0xc000138790) Data frame received for 3\nI0909 18:53:46.355869 1468 log.go:172] (0xc0007a6f00) (3) Data frame handling\nI0909 18:53:46.355894 1468 log.go:172] (0xc000138790) Data frame received for 5\nI0909 18:53:46.355907 1468 log.go:172] (0xc0007066e0) (5) Data frame handling\nI0909 18:53:46.355918 1468 log.go:172] (0xc0007a6f00) (3) Data frame sent\nI0909 18:53:46.355930 1468 log.go:172] (0xc000138790) Data frame received for 3\nI0909 18:53:46.355938 1468 log.go:172] (0xc0007a6f00) (3) Data frame handling\nI0909 18:53:46.357618 1468 log.go:172] (0xc000138790) Data frame received for 1\nI0909 18:53:46.357631 1468 log.go:172] (0xc000706640) (1) Data frame handling\nI0909 18:53:46.357638 1468 log.go:172] (0xc000706640) (1) Data frame sent\nI0909 18:53:46.357653 1468 log.go:172] (0xc000138790) (0xc000706640) Stream removed, broadcasting: 1\nI0909 18:53:46.357710 1468 log.go:172] (0xc000138790) Go away received\nI0909 18:53:46.357813 1468 log.go:172] (0xc000138790) (0xc000706640) Stream removed, broadcasting: 1\nI0909 18:53:46.357830 1468 log.go:172] (0xc000138790) (0xc0007a6f00) Stream removed, broadcasting: 3\nI0909 18:53:46.357836 1468 log.go:172] (0xc000138790) (0xc0007066e0) Stream removed, broadcasting: 5\n" Sep 9 18:53:46.361: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 18:53:46.361: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 18:53:46.364: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:53:46.364: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 18:53:46.364: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 9 18:53:46.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:53:46.559: INFO: stderr: "I0909 18:53:46.487657 1491 log.go:172] (0xc000138630) (0xc000728640) Create stream\nI0909 18:53:46.487710 1491 log.go:172] (0xc000138630) (0xc000728640) Stream added, broadcasting: 1\nI0909 18:53:46.490896 1491 log.go:172] (0xc000138630) Reply frame received for 1\nI0909 18:53:46.490944 1491 log.go:172] (0xc000138630) (0xc0007d6c80) Create stream\nI0909 18:53:46.490962 1491 log.go:172] (0xc000138630) (0xc0007d6c80) Stream added, broadcasting: 
3\nI0909 18:53:46.492183 1491 log.go:172] (0xc000138630) Reply frame received for 3\nI0909 18:53:46.492233 1491 log.go:172] (0xc000138630) (0xc00058a000) Create stream\nI0909 18:53:46.492247 1491 log.go:172] (0xc000138630) (0xc00058a000) Stream added, broadcasting: 5\nI0909 18:53:46.493334 1491 log.go:172] (0xc000138630) Reply frame received for 5\nI0909 18:53:46.553742 1491 log.go:172] (0xc000138630) Data frame received for 5\nI0909 18:53:46.553794 1491 log.go:172] (0xc00058a000) (5) Data frame handling\nI0909 18:53:46.553822 1491 log.go:172] (0xc000138630) Data frame received for 3\nI0909 18:53:46.553832 1491 log.go:172] (0xc0007d6c80) (3) Data frame handling\nI0909 18:53:46.553846 1491 log.go:172] (0xc0007d6c80) (3) Data frame sent\nI0909 18:53:46.553862 1491 log.go:172] (0xc000138630) Data frame received for 3\nI0909 18:53:46.553875 1491 log.go:172] (0xc0007d6c80) (3) Data frame handling\nI0909 18:53:46.555298 1491 log.go:172] (0xc000138630) Data frame received for 1\nI0909 18:53:46.555332 1491 log.go:172] (0xc000728640) (1) Data frame handling\nI0909 18:53:46.555346 1491 log.go:172] (0xc000728640) (1) Data frame sent\nI0909 18:53:46.555360 1491 log.go:172] (0xc000138630) (0xc000728640) Stream removed, broadcasting: 1\nI0909 18:53:46.555575 1491 log.go:172] (0xc000138630) (0xc000728640) Stream removed, broadcasting: 1\nI0909 18:53:46.555595 1491 log.go:172] (0xc000138630) (0xc0007d6c80) Stream removed, broadcasting: 3\nI0909 18:53:46.555608 1491 log.go:172] (0xc000138630) (0xc00058a000) Stream removed, broadcasting: 5\n" Sep 9 18:53:46.559: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:53:46.559: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:53:46.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:53:46.834: INFO: stderr: "I0909 18:53:46.702085 1512 log.go:172] (0xc000162840) (0xc0006872c0) Create stream\nI0909 18:53:46.702155 1512 log.go:172] (0xc000162840) (0xc0006872c0) Stream added, broadcasting: 1\nI0909 18:53:46.706995 1512 log.go:172] (0xc000162840) Reply frame received for 1\nI0909 18:53:46.707073 1512 log.go:172] (0xc000162840) (0xc00034a000) Create stream\nI0909 18:53:46.707101 1512 log.go:172] (0xc000162840) (0xc00034a000) Stream added, broadcasting: 3\nI0909 18:53:46.709722 1512 log.go:172] (0xc000162840) Reply frame received for 3\nI0909 18:53:46.709775 1512 log.go:172] (0xc000162840) (0xc00073c000) Create stream\nI0909 18:53:46.709793 1512 log.go:172] (0xc000162840) (0xc00073c000) Stream added, broadcasting: 5\nI0909 18:53:46.710824 1512 log.go:172] (0xc000162840) Reply frame received for 5\nI0909 18:53:46.828652 1512 log.go:172] (0xc000162840) Data frame received for 3\nI0909 18:53:46.828707 1512 log.go:172] (0xc00034a000) (3) Data frame handling\nI0909 18:53:46.828732 1512 log.go:172] (0xc00034a000) (3) Data frame sent\nI0909 18:53:46.828754 1512 log.go:172] (0xc000162840) Data frame received for 3\nI0909 18:53:46.828770 1512 log.go:172] (0xc00034a000) (3) Data frame handling\nI0909 18:53:46.829207 1512 log.go:172] (0xc000162840) Data frame received for 5\nI0909 18:53:46.829227 1512 log.go:172] (0xc00073c000) (5) Data frame handling\nI0909 18:53:46.830954 1512 log.go:172] (0xc000162840) Data frame received for 1\nI0909 18:53:46.830994 1512 log.go:172] (0xc0006872c0) (1) Data frame 
handling\nI0909 18:53:46.831022 1512 log.go:172] (0xc0006872c0) (1) Data frame sent\nI0909 18:53:46.831038 1512 log.go:172] (0xc000162840) (0xc0006872c0) Stream removed, broadcasting: 1\nI0909 18:53:46.831055 1512 log.go:172] (0xc000162840) Go away received\nI0909 18:53:46.831317 1512 log.go:172] (0xc000162840) (0xc0006872c0) Stream removed, broadcasting: 1\nI0909 18:53:46.831351 1512 log.go:172] (0xc000162840) (0xc00034a000) Stream removed, broadcasting: 3\nI0909 18:53:46.831366 1512 log.go:172] (0xc000162840) (0xc00073c000) Stream removed, broadcasting: 5\n" Sep 9 18:53:46.834: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:53:46.834: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:53:46.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 18:53:47.056: INFO: stderr: "I0909 18:53:46.957951 1535 log.go:172] (0xc00014c840) (0xc000738640) Create stream\nI0909 18:53:46.958023 1535 log.go:172] (0xc00014c840) (0xc000738640) Stream added, broadcasting: 1\nI0909 18:53:46.961251 1535 log.go:172] (0xc00014c840) Reply frame received for 1\nI0909 18:53:46.961286 1535 log.go:172] (0xc00014c840) (0xc000546d20) Create stream\nI0909 18:53:46.961298 1535 log.go:172] (0xc00014c840) (0xc000546d20) Stream added, broadcasting: 3\nI0909 18:53:46.962186 1535 log.go:172] (0xc00014c840) Reply frame received for 3\nI0909 18:53:46.962239 1535 log.go:172] (0xc00014c840) (0xc000520000) Create stream\nI0909 18:53:46.962253 1535 log.go:172] (0xc00014c840) (0xc000520000) Stream added, broadcasting: 5\nI0909 18:53:46.963079 1535 log.go:172] (0xc00014c840) Reply frame received for 5\nI0909 18:53:47.044629 1535 log.go:172] (0xc00014c840) Data frame received for 5\nI0909 18:53:47.044660 1535 log.go:172] (0xc000520000) (5) Data frame handling\nI0909 18:53:47.044729 1535 log.go:172] (0xc00014c840) Data frame received for 3\nI0909 18:53:47.044777 1535 log.go:172] (0xc000546d20) (3) Data frame handling\nI0909 18:53:47.044813 1535 log.go:172] (0xc000546d20) (3) Data frame sent\nI0909 18:53:47.045103 1535 log.go:172] (0xc00014c840) Data frame received for 3\nI0909 18:53:47.045114 1535 log.go:172] (0xc000546d20) (3) Data frame handling\nI0909 18:53:47.052960 1535 log.go:172] (0xc00014c840) Data frame received for 1\nI0909 18:53:47.052992 1535 log.go:172] (0xc000738640) (1) Data frame handling\nI0909 18:53:47.053029 1535 log.go:172] (0xc000738640) (1) Data frame sent\nI0909 18:53:47.053045 1535 log.go:172] (0xc00014c840) (0xc000738640) Stream removed, broadcasting: 1\nI0909 18:53:47.053060 1535 log.go:172] (0xc00014c840) Go away received\nI0909 18:53:47.053385 1535 log.go:172] (0xc00014c840) (0xc000738640) Stream removed, broadcasting: 1\nI0909 18:53:47.053416 1535 log.go:172] (0xc00014c840) (0xc000546d20) Stream removed, broadcasting: 3\nI0909 18:53:47.053431 1535 log.go:172] (0xc00014c840) (0xc000520000) Stream removed, broadcasting: 5\n" Sep 9 18:53:47.056: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 18:53:47.056: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 18:53:47.056: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:53:47.059: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 
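Each of the "Running '/usr/local/bin/kubectl ... exec'" entries above is the framework shelling into a stateful pod to move nginx's index.html out of the way, so that the pod's readiness check starts failing (and, later in the test, the reverse mv restores it). A minimal Go stand-in for one such call is sketched below, using the namespace, pod name, and shell command exactly as logged; kubeconfig handling is simplified and the framework's real helper streams I/O over the API server rather than invoking the kubectl binary.

// Minimal stand-in for the logged RunHostCmd invocations: run kubectl exec
// against pod ss-0 and move index.html aside so readiness fails.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"kubectl",
		"--kubeconfig", "/root/.kube/config",
		"exec", "--namespace=e2e-tests-statefulset-xn7gp", "ss-0",
		"--", "/bin/sh", "-c",
		"mv -v /usr/share/nginx/html/index.html /tmp/ || true",
	)
	out, err := cmd.CombinedOutput() // capture stdout and stderr together, as the log does
	fmt.Printf("err=%v\n%s", err, out)
}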
Sep 9 18:53:57.068: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:53:57.068: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:53:57.068: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 9 18:53:57.083: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:53:57.083: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:53:57.083: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:53:57.083: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:53:57.083: INFO: Sep 9 18:53:57.083: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 9 18:53:58.115: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:53:58.115: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:53:58.115: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:53:58.115: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:53:58.115: INFO: Sep 9 
18:53:58.115: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 9 18:53:59.120: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:53:59.120: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:53:59.120: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:53:59.120: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:53:59.120: INFO: Sep 9 18:53:59.120: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 9 18:54:00.124: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:00.124: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:00.124: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:00.124: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:00.124: INFO: Sep 9 18:54:00.124: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 9 18:54:01.129: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:01.129: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:01.129: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:01.129: INFO: Sep 9 18:54:01.129: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 9 18:54:02.151: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:02.151: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:02.151: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:02.151: INFO: Sep 9 18:54:02.151: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 9 18:54:03.156: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:03.156: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:03.156: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:03.156: INFO: Sep 9 18:54:03.156: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 9 18:54:04.161: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:04.161: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:04.161: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:04.161: INFO: Sep 9 18:54:04.161: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 9 18:54:05.166: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:05.166: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:05.166: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:05.166: INFO: Sep 9 18:54:05.166: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 9 18:54:06.170: INFO: POD NODE PHASE GRACE CONDITIONS Sep 9 18:54:06.170: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:15 +0000 UTC }] Sep 9 18:54:06.170: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:53:35 +0000 UTC }] Sep 9 18:54:06.170: INFO: Sep 9 18:54:06.170: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-xn7gp Sep 9 18:54:07.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:54:07.302: INFO: rc: 1 Sep 9 18:54:07.302: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001d1dc20 exit status 1 true [0xc001766af8 0xc001766b10 0xc001766b28] [0xc001766af8 0xc001766b10 0xc001766b28] [0xc001766b08 0xc001766b20] [0x935700 0x935700] 0xc001432480 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Sep 9 18:54:17.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:54:17.407: INFO: rc: 1 Sep 9 18:54:17.407: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d1dd40 exit status 1 true [0xc001766b30 0xc001766b48 0xc001766b60] [0xc001766b30 0xc001766b48 0xc001766b60] [0xc001766b40 0xc001766b58] [0x935700 0x935700] 0xc001432720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:54:27.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:54:27.489: INFO: rc: 1 Sep 9 18:54:27.489: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00086a450 exit status 1 true [0xc000436c20 0xc000436c40 0xc000436ca0] [0xc000436c20 0xc000436c40 0xc000436ca0] [0xc000436c38 0xc000436c80] [0x935700 0x935700] 0xc00166e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:54:37.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:54:37.575: INFO: rc: 1 Sep 9 18:54:37.575: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00086a5a0 exit status 1 true [0xc000436cb0 0xc000436d00 0xc000436d48] [0xc000436cb0 0xc000436d00 0xc000436d48] [0xc000436cd0 0xc000436d28] [0x935700 0x935700] 0xc00166f560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:54:47.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:54:47.665: INFO: rc: 1 Sep 9 18:54:47.665: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d1de90 exit status 1 true [0xc001766b68 0xc001766b80 0xc001766b98] [0xc001766b68 0xc001766b80 0xc001766b98] [0xc001766b78 0xc001766b90] [0x935700 0x935700] 0xc001432a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:54:57.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:54:57.788: INFO: rc: 1 Sep 9 18:54:57.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e120 exit status 1 true [0xc00000e308 0xc00000edd0 0xc00000ef08] [0xc00000e308 0xc00000edd0 0xc00000ef08] [0xc00000eda0 0xc00000eef0] [0x935700 0x935700] 0xc0028b41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:55:07.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:55:07.873: INFO: rc: 1 Sep 9 18:55:07.873: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001df0150 exit status 1 true [0xc00016e000 0xc0017b0010 0xc0017b0030] [0xc00016e000 0xc0017b0010 0xc0017b0030] [0xc0017b0008 0xc0017b0028] [0x935700 0x935700] 0xc0024d21e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:55:17.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:55:17.966: INFO: rc: 1 Sep 9 18:55:17.966: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243a150 exit status 1 true [0xc001f32000 0xc001f32018 0xc001f32030] [0xc001f32000 0xc001f32018 0xc001f32030] [0xc001f32010 0xc001f32028] [0x935700 0x935700] 0xc001d6cba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:55:27.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:55:28.053: INFO: rc: 1 Sep 9 18:55:28.054: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e2a0 exit status 1 true [0xc00000ef18 0xc00000f010 0xc00000f130] [0xc00000ef18 0xc00000f010 
0xc00000f130] [0xc00000eff8 0xc00000f0c8] [0x935700 0x935700] 0xc0028b4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:55:38.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:55:38.134: INFO: rc: 1 Sep 9 18:55:38.134: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e3c0 exit status 1 true [0xc00000f1c8 0xc00000f290 0xc00000f3b0] [0xc00000f1c8 0xc00000f290 0xc00000f3b0] [0xc00000f208 0xc00000f378] [0x935700 0x935700] 0xc0028b4720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:55:48.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:55:48.211: INFO: rc: 1 Sep 9 18:55:48.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00215a120 exit status 1 true [0xc000d120f8 0xc000d12270 0xc000d122e8] [0xc000d120f8 0xc000d12270 0xc000d122e8] [0xc000d12138 0xc000d122c0] [0x935700 0x935700] 0xc001da0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:55:58.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:55:58.300: INFO: rc: 1 Sep 9 18:55:58.300: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e4e0 exit status 1 true [0xc00000f3d0 0xc00000f510 0xc00000f5b8] [0xc00000f3d0 0xc00000f510 0xc00000f5b8] [0xc00000f4d0 0xc00000f588] [0x935700 0x935700] 0xc0028b49c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:56:08.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:56:08.387: INFO: rc: 1 Sep 9 18:56:08.387: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243a4e0 exit status 1 true [0xc001f32038 0xc001f32050 0xc001f32068] [0xc001f32038 0xc001f32050 0xc001f32068] [0xc001f32048 0xc001f32060] [0x935700 0x935700] 0xc001d6cfc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:56:18.387: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:56:18.476: INFO: rc: 1 Sep 9 18:56:18.476: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e600 exit status 1 true [0xc00000f5c0 0xc00000f6a0 0xc00000f798] [0xc00000f5c0 0xc00000f6a0 0xc00000f798] [0xc00000f658 0xc00000f780] [0x935700 0x935700] 0xc0028b4c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:56:28.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:56:28.557: INFO: rc: 1 Sep 9 18:56:28.557: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00215a240 exit status 1 true [0xc000d12358 0xc000d12440 0xc000d12528] [0xc000d12358 0xc000d12440 0xc000d12528] [0xc000d123f8 0xc000d124f8] [0x935700 0x935700] 0xc001da05a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:56:38.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:56:38.644: INFO: rc: 1 Sep 9 18:56:38.644: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001df0660 exit status 1 true [0xc0017b0038 0xc0017b0050 0xc0017b0068] [0xc0017b0038 0xc0017b0050 0xc0017b0068] [0xc0017b0048 0xc0017b0060] [0x935700 0x935700] 0xc0024d2780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:56:48.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:56:48.727: INFO: rc: 1 Sep 9 18:56:48.727: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e7b0 exit status 1 true [0xc00000f7c0 0xc00000f840 0xc00000f8b0] [0xc00000f7c0 0xc00000f840 0xc00000f8b0] [0xc00000f7f8 0xc00000f8a8] [0x935700 0x935700] 0xc0028b4f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:56:58.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:56:58.812: 
INFO: rc: 1 Sep 9 18:56:58.812: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243a180 exit status 1 true [0xc001f32000 0xc001f32018 0xc001f32030] [0xc001f32000 0xc001f32018 0xc001f32030] [0xc001f32010 0xc001f32028] [0x935700 0x935700] 0xc001d6cba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:57:08.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:57:08.898: INFO: rc: 1 Sep 9 18:57:08.898: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001df04b0 exit status 1 true [0xc0017b0000 0xc0017b0018 0xc0017b0038] [0xc0017b0000 0xc0017b0018 0xc0017b0038] [0xc0017b0010 0xc0017b0030] [0x935700 0x935700] 0xc0024d21e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:57:18.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:57:19.005: INFO: rc: 1 Sep 9 18:57:19.005: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e150 exit status 1 true [0xc00000e100 0xc00000eda0 0xc00000eef0] [0xc00000e100 0xc00000eda0 0xc00000eef0] [0xc00000ed70 0xc00000ee60] [0x935700 0x935700] 0xc0028b41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:57:29.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:57:29.086: INFO: rc: 1 Sep 9 18:57:29.087: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3e270 exit status 1 true [0xc00000ef08 0xc00000eff8 0xc00000f0c8] [0xc00000ef08 0xc00000eff8 0xc00000f0c8] [0xc00000ef50 0xc00000f090] [0x935700 0x935700] 0xc0028b4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:57:39.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:57:39.175: INFO: rc: 1 Sep 9 18:57:39.175: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp 
ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243a4b0 exit status 1 true [0xc001f32038 0xc001f32050 0xc001f32068] [0xc001f32038 0xc001f32050 0xc001f32068] [0xc001f32048 0xc001f32060] [0x935700 0x935700] 0xc001d6cfc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:57:49.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:57:49.262: INFO: rc: 1 Sep 9 18:57:49.262: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243a6c0 exit status 1 true [0xc001f32070 0xc001f32088 0xc001f320a0] [0xc001f32070 0xc001f32088 0xc001f320a0] [0xc001f32080 0xc001f32098] [0x935700 0x935700] 0xc001d6d440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:57:59.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:57:59.353: INFO: rc: 1 Sep 9 18:57:59.353: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001df06c0 exit status 1 true [0xc0017b0040 0xc0017b0058 0xc0017b0070] [0xc0017b0040 0xc0017b0058 0xc0017b0070] [0xc0017b0050 0xc0017b0068] [0x935700 0x935700] 0xc0024d2780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:58:09.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:58:09.441: INFO: rc: 1 Sep 9 18:58:09.441: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00215a150 exit status 1 true [0xc000d120f8 0xc000d12270 0xc000d122e8] [0xc000d120f8 0xc000d12270 0xc000d122e8] [0xc000d12138 0xc000d122c0] [0x935700 0x935700] 0xc001da0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:58:19.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:58:19.534: INFO: rc: 1 Sep 9 18:58:19.534: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00215a2d0 exit status 1 true [0xc000d12358 0xc000d12440 0xc000d12528] 
[0xc000d12358 0xc000d12440 0xc000d12528] [0xc000d123f8 0xc000d124f8] [0x935700 0x935700] 0xc001da05a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:58:29.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:58:29.622: INFO: rc: 1 Sep 9 18:58:29.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00215a3f0 exit status 1 true [0xc000d12540 0xc000d125a8 0xc000d12638] [0xc000d12540 0xc000d125a8 0xc000d12638] [0xc000d12580 0xc000d125e8] [0x935700 0x935700] 0xc001da0840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:58:39.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:58:39.717: INFO: rc: 1 Sep 9 18:58:39.717: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00215a510 exit status 1 true [0xc000d12678 0xc000d126c0 0xc000d127b8] [0xc000d12678 0xc000d126c0 0xc000d127b8] [0xc000d126a8 0xc000d12780] [0x935700 0x935700] 0xc001da0ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:58:49.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:58:49.804: INFO: rc: 1 Sep 9 18:58:49.804: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243aa50 exit status 1 true [0xc001f320a8 0xc001f320c0 0xc001f320d8] [0xc001f320a8 0xc001f320c0 0xc001f320d8] [0xc001f320b8 0xc001f320d0] [0x935700 0x935700] 0xc001d6d6e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 9 18:58:59.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:58:59.911: INFO: rc: 1 Sep 9 18:58:59.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243a150 exit status 1 true [0xc001f32000 0xc001f32018 0xc001f32030] [0xc001f32000 0xc001f32018 0xc001f32030] [0xc001f32010 0xc001f32028] [0x935700 0x935700] 0xc001d6cba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Sep 
9 18:59:09.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-xn7gp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 18:59:09.995: INFO: rc: 1 Sep 9 18:59:09.996: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Sep 9 18:59:09.996: INFO: Scaling statefulset ss to 0 Sep 9 18:59:10.003: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 9 18:59:10.005: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xn7gp Sep 9 18:59:10.007: INFO: Scaling statefulset ss to 0 Sep 9 18:59:10.013: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 18:59:10.015: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:59:10.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xn7gp" for this suite. Sep 9 18:59:16.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:59:16.067: INFO: namespace: e2e-tests-statefulset-xn7gp, resource: bindings, ignored listing per whitelist Sep 9 18:59:16.126: INFO: namespace e2e-tests-statefulset-xn7gp deletion completed in 6.094336283s • [SLOW TEST:360.902 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:59:16.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 18:59:16.272: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 9 18:59:21.276: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 9 18:59:21.277: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 9 18:59:23.281: INFO: Creating deployment "test-rollover-deployment" Sep 9 18:59:23.316: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 9 18:59:25.323: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 9 18:59:25.328: INFO: Ensure that both replica sets have 1 created replica 
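The long "Waiting 10s to retry failed RunHostCmd" sequence above is the e2e framework re-running the same kubectl exec every ten seconds until the pod comes back or the retry window closes. As a rough standalone sketch of that retry shape (not the framework's own RunHostCmd helper; the namespace, pod name and command are copied from the log purely as placeholders):

    // Sketch only: re-run a kubectl exec every 10 seconds until it succeeds or a
    // deadline passes, mirroring the retry pattern in the log above. This is not
    // the e2e framework's RunHostCmd; namespace, pod and command are placeholders.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func retryHostCmd(namespace, pod, cmd string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("kubectl", "exec", "--namespace="+namespace, pod,
                "--", "/bin/sh", "-c", cmd).CombinedOutput()
            if err == nil {
                fmt.Printf("stdout: %s\n", out)
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("command never succeeded: %v, last output: %s", err, out)
            }
            fmt.Printf("rc != 0 (%v), waiting %s to retry\n", err, interval)
            time.Sleep(interval)
        }
    }

    func main() {
        _ = retryHostCmd("e2e-tests-statefulset-xn7gp", "ss-0",
            "mv -v /tmp/index.html /usr/share/nginx/html/ || true", 10*time.Second, 5*time.Minute)
    }

In the run above the retries span roughly five minutes before the test moves on and scales the StatefulSet down.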
Sep 9 18:59:25.334: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 9 18:59:25.339: INFO: Updating deployment test-rollover-deployment Sep 9 18:59:25.339: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 9 18:59:27.350: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 9 18:59:27.357: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 9 18:59:27.362: INFO: all replica sets need to contain the pod-template-hash label Sep 9 18:59:27.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274765, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 18:59:29.370: INFO: all replica sets need to contain the pod-template-hash label Sep 9 18:59:29.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274769, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 18:59:31.370: INFO: all replica sets need to contain the pod-template-hash label Sep 9 18:59:31.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274769, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 18:59:33.370: INFO: all replica sets need to contain the pod-template-hash label Sep 9 18:59:33.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274769, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 18:59:35.371: INFO: all replica sets need to contain the pod-template-hash label Sep 9 18:59:35.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274769, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 18:59:37.371: INFO: all replica sets need to contain the pod-template-hash label Sep 9 18:59:37.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274769, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735274763, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 18:59:39.369: INFO: Sep 9 18:59:39.370: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Sep 9 18:59:39.408: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2spsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2spsv/deployments/test-rollover-deployment,UID:9311cb2e-f2ce-11ea-b060-0242ac120006,ResourceVersion:739761,Generation:2,CreationTimestamp:2020-09-09 18:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-09 18:59:23 +0000 UTC 2020-09-09 18:59:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-09 18:59:39 +0000 UTC 2020-09-09 18:59:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Sep 9 18:59:39.412: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2spsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2spsv/replicasets/test-rollover-deployment-5b8479fdb6,UID:944bee05-f2ce-11ea-b060-0242ac120006,ResourceVersion:739752,Generation:2,CreationTimestamp:2020-09-09 18:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9311cb2e-f2ce-11ea-b060-0242ac120006 0xc000e4efc7 0xc000e4efc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Sep 9 18:59:39.412: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 9 18:59:39.412: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2spsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2spsv/replicasets/test-rollover-controller,UID:8edf75f6-f2ce-11ea-b060-0242ac120006,ResourceVersion:739760,Generation:2,CreationTimestamp:2020-09-09 18:59:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9311cb2e-f2ce-11ea-b060-0242ac120006 0xc000967a27 0xc000967a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 9 18:59:39.412: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2spsv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2spsv/replicasets/test-rollover-deployment-58494b7559,UID:931847a3-f2ce-11ea-b060-0242ac120006,ResourceVersion:739714,Generation:2,CreationTimestamp:2020-09-09 18:59:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9311cb2e-f2ce-11ea-b060-0242ac120006 0xc000751ba7 0xc000751ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 9 18:59:39.416: INFO: Pod "test-rollover-deployment-5b8479fdb6-gwjh8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-gwjh8,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2spsv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2spsv/pods/test-rollover-deployment-5b8479fdb6-gwjh8,UID:945a4845-f2ce-11ea-b060-0242ac120006,ResourceVersion:739730,Generation:0,CreationTimestamp:2020-09-09 18:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 944bee05-f2ce-11ea-b060-0242ac120006 0xc000d42337 0xc000d42338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gtqx9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtqx9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gtqx9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d423b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d423d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:59:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 18:59:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-09-09 18:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.61,StartTime:2020-09-09 18:59:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-09 18:59:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3dd90484e3749168194807b7a76e708377bef8767d490bab332768c8ed775e2f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 18:59:39.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2spsv" for this suite. Sep 9 18:59:45.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 18:59:45.734: INFO: namespace: e2e-tests-deployment-2spsv, resource: bindings, ignored listing per whitelist Sep 9 18:59:45.798: INFO: namespace e2e-tests-deployment-2spsv deletion completed in 6.377137836s • [SLOW TEST:29.671 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 18:59:45.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
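The spec starting here creates a pod whose container carries a preStop exec hook and then deletes it, checking that the hook fired against the handler container created in the step above. Illustrative only, a minimal pod of that shape could be declared like this; the types follow a recent k8s.io/api (where the handler type is LifecycleHandler; on the 1.13-era API used in this run it was still called Handler), and the image and hook command are placeholders:

    // Illustrative only: a minimal pod carrying a preStop exec hook, in the spirit
    // of the pod-with-prestop-exec-hook pod exercised below. Types follow a recent
    // k8s.io/api (the handler type is LifecycleHandler there; on the 1.13-era API
    // used in this run it was still called Handler). Image and command are placeholders.
    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podWithPreStopExecHook() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-exec-hook",
                    Image: "docker.io/library/nginx:1.14-alpine",
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.LifecycleHandler{
                            Exec: &corev1.ExecAction{
                                // Run by the kubelet before the container is stopped.
                                Command: []string{"sh", "-c", "echo prestop"},
                            },
                        },
                    },
                }},
            },
        }
    }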
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 9 18:59:53.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 18:59:53.949: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 18:59:55.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 18:59:55.953: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 18:59:57.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 18:59:57.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 18:59:59.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 18:59:59.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:01.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:01.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:03.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:03.953: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:05.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:05.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:07.950: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:07.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:09.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:09.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:11.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:11.959: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:13.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:13.971: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:15.950: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:15.954: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:17.949: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:17.959: INFO: Pod pod-with-prestop-exec-hook still exists Sep 9 19:00:19.950: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 9 19:00:19.953: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:00:19.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-s82m5" for this suite. 
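The disappearance loop above (delete the pod, then poll every two seconds until it is NotFound) gives the preStop hook time to run before the container is killed. A rough sketch of that shape with client-go, not the conformance test's own code; it assumes a recent client-go (v0.18 or later, where Delete and Get take a context), and the kubeconfig path, namespace, pod name and grace period are placeholders:

    // Rough sketch, not the conformance test's code: delete a pod with a non-zero
    // grace period so its preStop hook can run, then poll every 2s until the pod
    // is gone. Assumes client-go v0.18+ (Delete/Get take a context); kubeconfig
    // path, namespace, pod name and grace period are placeholders.
    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ns, name := "default", "pod-with-prestop-exec-hook"
        grace := int64(30)
        if err := client.CoreV1().Pods(ns).Delete(context.TODO(), name,
            metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
            panic(err)
        }
        for {
            if _, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{}); apierrors.IsNotFound(err) {
                break
            }
            time.Sleep(2 * time.Second)
        }
    }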
Sep 9 19:00:41.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:00:41.988: INFO: namespace: e2e-tests-container-lifecycle-hook-s82m5, resource: bindings, ignored listing per whitelist Sep 9 19:00:42.084: INFO: namespace e2e-tests-container-lifecycle-hook-s82m5 deletion completed in 22.118958291s • [SLOW TEST:56.286 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:00:42.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gzhm5 Sep 9 19:00:46.316: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gzhm5 STEP: checking the pod's current state and verifying that restartCount is present Sep 9 19:00:46.319: INFO: Initial restart count of pod liveness-http is 0 Sep 9 19:01:06.388: INFO: Restart count of pod e2e-tests-container-probe-gzhm5/liveness-http is now 1 (20.068440686s elapsed) Sep 9 19:01:28.433: INFO: Restart count of pod e2e-tests-container-probe-gzhm5/liveness-http is now 2 (42.113184579s elapsed) Sep 9 19:01:48.484: INFO: Restart count of pod e2e-tests-container-probe-gzhm5/liveness-http is now 3 (1m2.164737623s elapsed) Sep 9 19:02:06.530: INFO: Restart count of pod e2e-tests-container-probe-gzhm5/liveness-http is now 4 (1m20.210916674s elapsed) Sep 9 19:02:28.582: INFO: Restart count of pod e2e-tests-container-probe-gzhm5/liveness-http is now 5 (1m42.262464439s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:02:28.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-gzhm5" for this suite. 
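The liveness-http spec above asserts that a pod whose HTTP liveness probe keeps failing is restarted by the kubelet and that its restartCount only ever grows. As a sketch of the kind of container that produces that behaviour (field names follow a recent k8s.io/api, 1.23 or later, where the probe handler is the embedded ProbeHandler; image, path and port here are placeholders):

    // Sketch only: a container whose HTTP liveness probe fails quickly, so the
    // kubelet restarts it and restartCount climbs, which is what the liveness-http
    // spec above asserts (the count may only ever increase). Field names follow a
    // recent k8s.io/api (1.23+, embedded ProbeHandler); image, path, port are placeholders.
    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func livenessHTTPContainer() corev1.Container {
        return corev1.Container{
            Name:  "liveness-http",
            Image: "k8s.gcr.io/liveness",
            LivenessProbe: &corev1.Probe{
                ProbeHandler: corev1.ProbeHandler{
                    HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                },
                InitialDelaySeconds: 15,
                PeriodSeconds:       1,
                FailureThreshold:    1,
            },
        }
    }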
Sep 9 19:02:34.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:02:34.666: INFO: namespace: e2e-tests-container-probe-gzhm5, resource: bindings, ignored listing per whitelist Sep 9 19:02:34.710: INFO: namespace e2e-tests-container-probe-gzhm5 deletion completed in 6.09240463s • [SLOW TEST:112.625 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:02:34.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 9 19:02:34.875: INFO: Waiting up to 5m0s for pod "pod-05412ba7-f2cf-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-jchzv" to be "success or failure" Sep 9 19:02:34.880: INFO: Pod "pod-05412ba7-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415182ms Sep 9 19:02:36.884: INFO: Pod "pod-05412ba7-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008439485s Sep 9 19:02:38.888: INFO: Pod "pod-05412ba7-f2cf-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012246134s STEP: Saw pod success Sep 9 19:02:38.888: INFO: Pod "pod-05412ba7-f2cf-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:02:38.890: INFO: Trying to get logs from node hunter-worker pod pod-05412ba7-f2cf-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 19:02:38.924: INFO: Waiting for pod pod-05412ba7-f2cf-11ea-88c2-0242ac110007 to disappear Sep 9 19:02:38.941: INFO: Pod pod-05412ba7-f2cf-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:02:38.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jchzv" for this suite. 
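The (root,0644,default) emptydir spec above starts a pod whose test container writes a file into the emptyDir volume and verifies its permission bits. A loose illustration of that check in plain Go (the path and file content are placeholders; the real check runs inside the test container, not on the host):

    // Loose illustration of the (root,0644,default) check above: a file created in
    // the emptyDir mount ends up with permission bits 0644. Path and content are
    // placeholders; the real check runs inside the test container, not on the host.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        path := "/data/new_file_mode_0644"
        if err := os.WriteFile(path, []byte("mount-tester new file\n"), 0o644); err != nil {
            panic(err)
        }
        info, err := os.Stat(path)
        if err != nil {
            panic(err)
        }
        if perm := info.Mode().Perm(); perm != 0o644 {
            fmt.Printf("unexpected mode: got %o, want 644\n", perm)
            os.Exit(1)
        }
        fmt.Println("file mode is 0644")
    }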
Sep 9 19:02:44.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:02:45.017: INFO: namespace: e2e-tests-emptydir-jchzv, resource: bindings, ignored listing per whitelist Sep 9 19:02:45.032: INFO: namespace e2e-tests-emptydir-jchzv deletion completed in 6.07920986s • [SLOW TEST:10.322 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:02:45.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0909 19:02:46.204782 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 9 19:02:46.204: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:02:46.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mvqcl" for this suite. 
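The garbage collector spec above deletes a Deployment without orphaning, then waits for the owned ReplicaSet and pods to be cleaned up through their ownerReferences. A sketch of issuing that kind of non-orphaning delete with client-go (assumes client-go v0.18 or later; the function name, namespace and deployment name are placeholders):

    // Sketch of the non-orphaning delete exercised above: removing a Deployment
    // with a background propagation policy lets the garbage collector delete the
    // owned ReplicaSet and pods via their ownerReferences instead of orphaning
    // them. Assumes client-go v0.18+; namespace and name are caller-supplied placeholders.
    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func deleteDeploymentWithGC(client kubernetes.Interface, namespace, name string) error {
        propagation := metav1.DeletePropagationBackground // Orphan would instead keep the ReplicaSet
        return client.AppsV1().Deployments(namespace).Delete(context.TODO(), name,
            metav1.DeleteOptions{PropagationPolicy: &propagation})
    }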
Sep 9 19:02:52.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:02:52.351: INFO: namespace: e2e-tests-gc-mvqcl, resource: bindings, ignored listing per whitelist Sep 9 19:02:52.359: INFO: namespace e2e-tests-gc-mvqcl deletion completed in 6.15125794s • [SLOW TEST:7.326 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:02:52.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nvv8x [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Sep 9 19:02:52.473: INFO: Found 0 stateful pods, waiting for 3 Sep 9 19:03:02.478: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 9 19:03:02.478: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 19:03:02.478: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Sep 9 19:03:12.477: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 9 19:03:12.477: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 9 19:03:12.477: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 9 19:03:12.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nvv8x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 19:03:12.722: INFO: stderr: "I0909 19:03:12.623195 2220 log.go:172] (0xc0002c6420) (0xc000716640) Create stream\nI0909 19:03:12.623271 2220 log.go:172] (0xc0002c6420) (0xc000716640) Stream added, broadcasting: 1\nI0909 19:03:12.626605 2220 log.go:172] (0xc0002c6420) Reply frame received for 1\nI0909 19:03:12.626644 2220 log.go:172] (0xc0002c6420) (0xc0005d4dc0) Create stream\nI0909 19:03:12.626662 2220 log.go:172] (0xc0002c6420) (0xc0005d4dc0) Stream added, broadcasting: 3\nI0909 19:03:12.627435 2220 log.go:172] (0xc0002c6420) Reply frame received for 3\nI0909 19:03:12.627481 2220 log.go:172] (0xc0002c6420) (0xc0007166e0) Create stream\nI0909 19:03:12.627503 2220 
log.go:172] (0xc0002c6420) (0xc0007166e0) Stream added, broadcasting: 5\nI0909 19:03:12.628254 2220 log.go:172] (0xc0002c6420) Reply frame received for 5\nI0909 19:03:12.715291 2220 log.go:172] (0xc0002c6420) Data frame received for 3\nI0909 19:03:12.715316 2220 log.go:172] (0xc0005d4dc0) (3) Data frame handling\nI0909 19:03:12.715329 2220 log.go:172] (0xc0005d4dc0) (3) Data frame sent\nI0909 19:03:12.715335 2220 log.go:172] (0xc0002c6420) Data frame received for 3\nI0909 19:03:12.715341 2220 log.go:172] (0xc0005d4dc0) (3) Data frame handling\nI0909 19:03:12.715898 2220 log.go:172] (0xc0002c6420) Data frame received for 5\nI0909 19:03:12.715929 2220 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0909 19:03:12.717797 2220 log.go:172] (0xc0002c6420) Data frame received for 1\nI0909 19:03:12.717814 2220 log.go:172] (0xc000716640) (1) Data frame handling\nI0909 19:03:12.717827 2220 log.go:172] (0xc000716640) (1) Data frame sent\nI0909 19:03:12.717857 2220 log.go:172] (0xc0002c6420) (0xc000716640) Stream removed, broadcasting: 1\nI0909 19:03:12.717941 2220 log.go:172] (0xc0002c6420) Go away received\nI0909 19:03:12.718012 2220 log.go:172] (0xc0002c6420) (0xc000716640) Stream removed, broadcasting: 1\nI0909 19:03:12.718043 2220 log.go:172] (0xc0002c6420) (0xc0005d4dc0) Stream removed, broadcasting: 3\nI0909 19:03:12.718103 2220 log.go:172] (0xc0002c6420) (0xc0007166e0) Stream removed, broadcasting: 5\n" Sep 9 19:03:12.723: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 19:03:12.723: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Sep 9 19:03:22.756: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 9 19:03:32.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nvv8x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 19:03:32.981: INFO: stderr: "I0909 19:03:32.902990 2242 log.go:172] (0xc000138630) (0xc00066b4a0) Create stream\nI0909 19:03:32.903052 2242 log.go:172] (0xc000138630) (0xc00066b4a0) Stream added, broadcasting: 1\nI0909 19:03:32.905319 2242 log.go:172] (0xc000138630) Reply frame received for 1\nI0909 19:03:32.905372 2242 log.go:172] (0xc000138630) (0xc0006ce000) Create stream\nI0909 19:03:32.905389 2242 log.go:172] (0xc000138630) (0xc0006ce000) Stream added, broadcasting: 3\nI0909 19:03:32.906059 2242 log.go:172] (0xc000138630) Reply frame received for 3\nI0909 19:03:32.906092 2242 log.go:172] (0xc000138630) (0xc00066b540) Create stream\nI0909 19:03:32.906101 2242 log.go:172] (0xc000138630) (0xc00066b540) Stream added, broadcasting: 5\nI0909 19:03:32.907043 2242 log.go:172] (0xc000138630) Reply frame received for 5\nI0909 19:03:32.975501 2242 log.go:172] (0xc000138630) Data frame received for 5\nI0909 19:03:32.975522 2242 log.go:172] (0xc00066b540) (5) Data frame handling\nI0909 19:03:32.975544 2242 log.go:172] (0xc000138630) Data frame received for 3\nI0909 19:03:32.975568 2242 log.go:172] (0xc0006ce000) (3) Data frame handling\nI0909 19:03:32.975590 2242 log.go:172] (0xc0006ce000) (3) Data frame sent\nI0909 19:03:32.975609 2242 log.go:172] (0xc000138630) Data frame received for 3\nI0909 19:03:32.975624 2242 log.go:172] (0xc0006ce000) (3) Data frame handling\nI0909 19:03:32.977290 
2242 log.go:172] (0xc000138630) Data frame received for 1\nI0909 19:03:32.977307 2242 log.go:172] (0xc00066b4a0) (1) Data frame handling\nI0909 19:03:32.977319 2242 log.go:172] (0xc00066b4a0) (1) Data frame sent\nI0909 19:03:32.977335 2242 log.go:172] (0xc000138630) (0xc00066b4a0) Stream removed, broadcasting: 1\nI0909 19:03:32.977348 2242 log.go:172] (0xc000138630) Go away received\nI0909 19:03:32.977485 2242 log.go:172] (0xc000138630) (0xc00066b4a0) Stream removed, broadcasting: 1\nI0909 19:03:32.977505 2242 log.go:172] (0xc000138630) (0xc0006ce000) Stream removed, broadcasting: 3\nI0909 19:03:32.977515 2242 log.go:172] (0xc000138630) (0xc00066b540) Stream removed, broadcasting: 5\n" Sep 9 19:03:32.981: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 19:03:32.981: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 19:03:53.002: INFO: Waiting for StatefulSet e2e-tests-statefulset-nvv8x/ss2 to complete update Sep 9 19:03:53.002: INFO: Waiting for Pod e2e-tests-statefulset-nvv8x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Sep 9 19:04:03.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nvv8x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Sep 9 19:04:03.289: INFO: stderr: "I0909 19:04:03.145974 2264 log.go:172] (0xc000138630) (0xc0006ad2c0) Create stream\nI0909 19:04:03.146046 2264 log.go:172] (0xc000138630) (0xc0006ad2c0) Stream added, broadcasting: 1\nI0909 19:04:03.148709 2264 log.go:172] (0xc000138630) Reply frame received for 1\nI0909 19:04:03.148768 2264 log.go:172] (0xc000138630) (0xc0006ad360) Create stream\nI0909 19:04:03.148783 2264 log.go:172] (0xc000138630) (0xc0006ad360) Stream added, broadcasting: 3\nI0909 19:04:03.149758 2264 log.go:172] (0xc000138630) Reply frame received for 3\nI0909 19:04:03.149803 2264 log.go:172] (0xc000138630) (0xc0006ad400) Create stream\nI0909 19:04:03.149819 2264 log.go:172] (0xc000138630) (0xc0006ad400) Stream added, broadcasting: 5\nI0909 19:04:03.150676 2264 log.go:172] (0xc000138630) Reply frame received for 5\nI0909 19:04:03.283571 2264 log.go:172] (0xc000138630) Data frame received for 3\nI0909 19:04:03.283635 2264 log.go:172] (0xc0006ad360) (3) Data frame handling\nI0909 19:04:03.283681 2264 log.go:172] (0xc0006ad360) (3) Data frame sent\nI0909 19:04:03.283709 2264 log.go:172] (0xc000138630) Data frame received for 3\nI0909 19:04:03.283722 2264 log.go:172] (0xc0006ad360) (3) Data frame handling\nI0909 19:04:03.283905 2264 log.go:172] (0xc000138630) Data frame received for 5\nI0909 19:04:03.283942 2264 log.go:172] (0xc0006ad400) (5) Data frame handling\nI0909 19:04:03.285675 2264 log.go:172] (0xc000138630) Data frame received for 1\nI0909 19:04:03.285699 2264 log.go:172] (0xc0006ad2c0) (1) Data frame handling\nI0909 19:04:03.285717 2264 log.go:172] (0xc0006ad2c0) (1) Data frame sent\nI0909 19:04:03.285730 2264 log.go:172] (0xc000138630) (0xc0006ad2c0) Stream removed, broadcasting: 1\nI0909 19:04:03.285904 2264 log.go:172] (0xc000138630) Go away received\nI0909 19:04:03.285959 2264 log.go:172] (0xc000138630) (0xc0006ad2c0) Stream removed, broadcasting: 1\nI0909 19:04:03.286005 2264 log.go:172] (0xc000138630) (0xc0006ad360) Stream removed, broadcasting: 3\nI0909 19:04:03.286023 2264 log.go:172] (0xc000138630) (0xc0006ad400) Stream removed, broadcasting: 5\n" Sep 9 
19:04:03.289: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Sep 9 19:04:03.290: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Sep 9 19:04:13.323: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 9 19:04:23.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nvv8x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Sep 9 19:04:23.577: INFO: stderr: "I0909 19:04:23.510927 2287 log.go:172] (0xc0007b6210) (0xc00071a640) Create stream\nI0909 19:04:23.510988 2287 log.go:172] (0xc0007b6210) (0xc00071a640) Stream added, broadcasting: 1\nI0909 19:04:23.512866 2287 log.go:172] (0xc0007b6210) Reply frame received for 1\nI0909 19:04:23.512900 2287 log.go:172] (0xc0007b6210) (0xc00071a6e0) Create stream\nI0909 19:04:23.512911 2287 log.go:172] (0xc0007b6210) (0xc00071a6e0) Stream added, broadcasting: 3\nI0909 19:04:23.513907 2287 log.go:172] (0xc0007b6210) Reply frame received for 3\nI0909 19:04:23.513959 2287 log.go:172] (0xc0007b6210) (0xc000682dc0) Create stream\nI0909 19:04:23.514019 2287 log.go:172] (0xc0007b6210) (0xc000682dc0) Stream added, broadcasting: 5\nI0909 19:04:23.514889 2287 log.go:172] (0xc0007b6210) Reply frame received for 5\nI0909 19:04:23.571456 2287 log.go:172] (0xc0007b6210) Data frame received for 5\nI0909 19:04:23.571488 2287 log.go:172] (0xc000682dc0) (5) Data frame handling\nI0909 19:04:23.571538 2287 log.go:172] (0xc0007b6210) Data frame received for 3\nI0909 19:04:23.571569 2287 log.go:172] (0xc00071a6e0) (3) Data frame handling\nI0909 19:04:23.571588 2287 log.go:172] (0xc00071a6e0) (3) Data frame sent\nI0909 19:04:23.571604 2287 log.go:172] (0xc0007b6210) Data frame received for 3\nI0909 19:04:23.571619 2287 log.go:172] (0xc00071a6e0) (3) Data frame handling\nI0909 19:04:23.572827 2287 log.go:172] (0xc0007b6210) Data frame received for 1\nI0909 19:04:23.572855 2287 log.go:172] (0xc00071a640) (1) Data frame handling\nI0909 19:04:23.572890 2287 log.go:172] (0xc00071a640) (1) Data frame sent\nI0909 19:04:23.572933 2287 log.go:172] (0xc0007b6210) (0xc00071a640) Stream removed, broadcasting: 1\nI0909 19:04:23.573053 2287 log.go:172] (0xc0007b6210) Go away received\nI0909 19:04:23.573267 2287 log.go:172] (0xc0007b6210) (0xc00071a640) Stream removed, broadcasting: 1\nI0909 19:04:23.573298 2287 log.go:172] (0xc0007b6210) (0xc00071a6e0) Stream removed, broadcasting: 3\nI0909 19:04:23.573312 2287 log.go:172] (0xc0007b6210) (0xc000682dc0) Stream removed, broadcasting: 5\n" Sep 9 19:04:23.577: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Sep 9 19:04:23.577: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Sep 9 19:04:33.600: INFO: Waiting for StatefulSet e2e-tests-statefulset-nvv8x/ss2 to complete update Sep 9 19:04:33.600: INFO: Waiting for Pod e2e-tests-statefulset-nvv8x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 9 19:04:33.600: INFO: Waiting for Pod e2e-tests-statefulset-nvv8x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Sep 9 19:04:43.611: INFO: Waiting for StatefulSet e2e-tests-statefulset-nvv8x/ss2 to complete update Sep 9 19:04:43.611: INFO: Waiting for Pod e2e-tests-statefulset-nvv8x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic 
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Sep 9 19:04:53.609: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nvv8x Sep 9 19:04:53.612: INFO: Scaling statefulset ss2 to 0 Sep 9 19:05:23.629: INFO: Waiting for statefulset status.replicas updated to 0 Sep 9 19:05:23.632: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:05:23.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nvv8x" for this suite. Sep 9 19:05:31.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:05:31.685: INFO: namespace: e2e-tests-statefulset-nvv8x, resource: bindings, ignored listing per whitelist Sep 9 19:05:31.746: INFO: namespace e2e-tests-statefulset-nvv8x deletion completed in 8.095937683s • [SLOW TEST:159.386 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:05:31.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 19:05:31.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-jd5b4" to be "success or failure" Sep 9 19:05:31.905: INFO: Pod "downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 30.984491ms Sep 9 19:05:33.909: INFO: Pod "downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035136176s Sep 9 19:05:35.913: INFO: Pod "downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039639256s STEP: Saw pod success Sep 9 19:05:35.913: INFO: Pod "downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:05:35.919: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 19:05:36.139: INFO: Waiting for pod downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007 to disappear Sep 9 19:05:36.261: INFO: Pod downwardapi-volume-6ec36553-f2cf-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:05:36.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jd5b4" for this suite. Sep 9 19:05:42.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:05:42.303: INFO: namespace: e2e-tests-downward-api-jd5b4, resource: bindings, ignored listing per whitelist Sep 9 19:05:42.363: INFO: namespace e2e-tests-downward-api-jd5b4 deletion completed in 6.098797149s • [SLOW TEST:10.617 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:05:42.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Sep 9 19:05:42.425: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 9 19:05:42.431: INFO: Waiting for terminating namespaces to be deleted... 
Sep 9 19:05:42.433: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Sep 9 19:05:42.437: INFO: kube-proxy-t9g4m from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded) Sep 9 19:05:42.437: INFO: Container kube-proxy ready: true, restart count 0 Sep 9 19:05:42.437: INFO: kindnet-4qkqp from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded) Sep 9 19:05:42.437: INFO: Container kindnet-cni ready: true, restart count 0 Sep 9 19:05:42.437: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Sep 9 19:05:42.443: INFO: kindnet-z7tw7 from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded) Sep 9 19:05:42.443: INFO: Container kindnet-cni ready: true, restart count 0 Sep 9 19:05:42.443: INFO: kube-proxy-vl5mq from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded) Sep 9 19:05:42.443: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Sep 9 19:05:42.534: INFO: Pod kindnet-4qkqp requesting resource cpu=100m on Node hunter-worker Sep 9 19:05:42.535: INFO: Pod kindnet-z7tw7 requesting resource cpu=100m on Node hunter-worker2 Sep 9 19:05:42.535: INFO: Pod kube-proxy-t9g4m requesting resource cpu=0m on Node hunter-worker Sep 9 19:05:42.535: INFO: Pod kube-proxy-vl5mq requesting resource cpu=0m on Node hunter-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-751f2ed5-f2cf-11ea-88c2-0242ac110007.163332fa2283020e], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-8xp2x/filler-pod-751f2ed5-f2cf-11ea-88c2-0242ac110007 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-751f2ed5-f2cf-11ea-88c2-0242ac110007.163332faaf2be5c3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-751f2ed5-f2cf-11ea-88c2-0242ac110007.163332fae2db20e6], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-751f2ed5-f2cf-11ea-88c2-0242ac110007.163332faf18884e3], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-75202221-f2cf-11ea-88c2-0242ac110007.163332fa22d99117], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-8xp2x/filler-pod-75202221-f2cf-11ea-88c2-0242ac110007 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-75202221-f2cf-11ea-88c2-0242ac110007.163332fa6dc1fb03], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-75202221-f2cf-11ea-88c2-0242ac110007.163332fab6d3afb7], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-75202221-f2cf-11ea-88c2-0242ac110007.163332facd092060], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.163332fb124ce8f4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:05:47.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-8xp2x" for this suite. 
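Note on the scheduling failure above: the FailedScheduling event ("0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.") is the expected outcome, because the two filler pods were sized to consume almost all allocatable CPU on hunter-worker and hunter-worker2. A minimal sketch of inspecting the same capacity math by hand with kubectl, using the node name from the log (any other cluster would substitute its own):

    # Allocatable capacity versus what is already requested on the node.
    $ kubectl describe node hunter-worker | grep -A 6 "Allocatable:"
    $ kubectl describe node hunter-worker | grep -A 10 "Allocated resources:"

    # CPU requests of the pods currently bound to that node.
    $ kubectl get pods --all-namespaces --field-selector spec.nodeName=hunter-worker \
        -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu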
Sep 9 19:05:53.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:05:53.742: INFO: namespace: e2e-tests-sched-pred-8xp2x, resource: bindings, ignored listing per whitelist Sep 9 19:05:53.794: INFO: namespace e2e-tests-sched-pred-8xp2x deletion completed in 6.076407568s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.430 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:05:53.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Sep 9 19:05:53.982: INFO: Waiting up to 5m0s for pod "downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-7drsx" to be "success or failure" Sep 9 19:05:53.997: INFO: Pod "downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.902273ms Sep 9 19:05:56.000: INFO: Pod "downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018462047s Sep 9 19:05:58.005: INFO: Pod "downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022796075s STEP: Saw pod success Sep 9 19:05:58.005: INFO: Pod "downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:05:58.007: INFO: Trying to get logs from node hunter-worker2 pod downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007 container dapi-container: STEP: delete the pod Sep 9 19:05:58.059: INFO: Waiting for pod downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007 to disappear Sep 9 19:05:58.087: INFO: Pod downward-api-7bef019d-f2cf-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:05:58.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7drsx" for this suite. 
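The env-var test above only logs pod phases, so the pod spec it submitted is not shown. As an illustrative, hand-written equivalent (the name dapi-demo and the MY_POD_* variable names are placeholders, not what the suite generates), a pod exposing the same three fields through the downward API looks roughly like this:

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-demo                  # placeholder name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep MY_POD_"]   # print the injected variables and exit
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF
    $ kubectl logs dapi-demo           # shows the pod name, namespace and IP in the output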
Sep 9 19:06:04.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:06:04.141: INFO: namespace: e2e-tests-downward-api-7drsx, resource: bindings, ignored listing per whitelist Sep 9 19:06:04.181: INFO: namespace e2e-tests-downward-api-7drsx deletion completed in 6.090162658s • [SLOW TEST:10.387 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:06:04.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Sep 9 19:06:04.358: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:04.360: INFO: Number of nodes with available pods: 0 Sep 9 19:06:04.360: INFO: Node hunter-worker is running more than one daemon pod Sep 9 19:06:05.404: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:05.406: INFO: Number of nodes with available pods: 0 Sep 9 19:06:05.406: INFO: Node hunter-worker is running more than one daemon pod Sep 9 19:06:06.366: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:06.369: INFO: Number of nodes with available pods: 0 Sep 9 19:06:06.369: INFO: Node hunter-worker is running more than one daemon pod Sep 9 19:06:07.482: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:07.486: INFO: Number of nodes with available pods: 0 Sep 9 19:06:07.486: INFO: Node hunter-worker is running more than one daemon pod Sep 9 19:06:08.366: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:08.370: INFO: Number of nodes with available pods: 1 Sep 9 19:06:08.370: INFO: Node hunter-worker2 is running more than one daemon pod Sep 9 19:06:09.370: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:09.373: INFO: Number of nodes with available pods: 2 Sep 9 19:06:09.374: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Sep 9 19:06:09.405: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 9 19:06:09.418: INFO: Number of nodes with available pods: 2 Sep 9 19:06:09.418: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-r2cdr, will wait for the garbage collector to delete the pods Sep 9 19:06:10.539: INFO: Deleting DaemonSet.extensions daemon-set took: 24.07513ms Sep 9 19:06:10.639: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.279396ms Sep 9 19:06:20.143: INFO: Number of nodes with available pods: 0 Sep 9 19:06:20.143: INFO: Number of running nodes: 0, number of available pods: 0 Sep 9 19:06:20.145: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-r2cdr/daemonsets","resourceVersion":"741217"},"items":null} Sep 9 19:06:20.148: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-r2cdr/pods","resourceVersion":"741217"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:06:20.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-r2cdr" for this suite. 
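In the step above the suite patches one daemon pod's status to Failed through the API and waits for the DaemonSet controller to replace it; that status write is done in the test code and has no direct kubectl equivalent. A rough manual analogue that exercises the same self-healing path, assuming the DaemonSet from the log still exists, is deleting one of its pods and watching a replacement appear:

    # The DaemonSet was the only workload in its namespace, so any pod there belongs to it.
    $ POD=$(kubectl -n e2e-tests-daemonsets-r2cdr get pods -o name | head -n 1)
    $ kubectl -n e2e-tests-daemonsets-r2cdr delete "$POD"
    $ kubectl -n e2e-tests-daemonsets-r2cdr get pods -o wide -w   # a new daemon pod appears on the same node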
Sep 9 19:06:26.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:06:26.235: INFO: namespace: e2e-tests-daemonsets-r2cdr, resource: bindings, ignored listing per whitelist Sep 9 19:06:26.262: INFO: namespace e2e-tests-daemonsets-r2cdr deletion completed in 6.10124886s • [SLOW TEST:22.081 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:06:26.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 19:06:26.443: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 9 19:06:26.453: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 9 19:06:31.457: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 9 19:06:31.458: INFO: Creating deployment "test-rolling-update-deployment" Sep 9 19:06:31.462: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Sep 9 19:06:31.468: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 9 19:06:33.477: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 9 19:06:33.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275191, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275191, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 9 19:06:35.483: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Sep 9 
19:06:35.492: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-hbddm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbddm/deployments/test-rolling-update-deployment,UID:92486cf2-f2cf-11ea-b060-0242ac120006,ResourceVersion:741314,Generation:1,CreationTimestamp:2020-09-09 19:06:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-09 19:06:31 +0000 UTC 2020-09-09 19:06:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-09 19:06:34 +0000 UTC 2020-09-09 19:06:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Sep 9 19:06:35.494: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-hbddm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbddm/replicasets/test-rolling-update-deployment-75db98fb4c,UID:924abe5a-f2cf-11ea-b060-0242ac120006,ResourceVersion:741305,Generation:1,CreationTimestamp:2020-09-09 19:06:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 92486cf2-f2cf-11ea-b060-0242ac120006 0xc001e27a57 0xc001e27a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Sep 9 19:06:35.494: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 9 19:06:35.494: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-hbddm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbddm/replicasets/test-rolling-update-controller,UID:8f4b3cd8-f2cf-11ea-b060-0242ac120006,ResourceVersion:741313,Generation:2,CreationTimestamp:2020-09-09 19:06:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 92486cf2-f2cf-11ea-b060-0242ac120006 0xc001e27827 0xc001e27828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 9 19:06:35.496: INFO: Pod "test-rolling-update-deployment-75db98fb4c-wrbbr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-wrbbr,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-hbddm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbddm/pods/test-rolling-update-deployment-75db98fb4c-wrbbr,UID:924dd7df-f2cf-11ea-b060-0242ac120006,ResourceVersion:741304,Generation:0,CreationTimestamp:2020-09-09 19:06:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 924abe5a-f2cf-11ea-b060-0242ac120006 0xc001512647 0xc001512648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4bskl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bskl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4bskl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001512700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001512720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:06:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:06:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:06:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:06:31 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.43,StartTime:2020-09-09 19:06:31 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-09 19:06:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://15d2ed531af179bdb1bce30efc5fc202d78c1b5f16718d107e640adc4a4bab39}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:06:35.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-hbddm" for this suite. 
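The object dumps above show the end state the test asserts: the adopted replica set test-rolling-update-controller scaled to 0 and the new replica set test-rolling-update-deployment-75db98fb4c owning the one running redis pod. Outside the suite, the same rolling-update behaviour can be reproduced with plain kubectl; the deployment name and images below are placeholders rather than the objects created by the test:

    # Create a deployment, roll it to a new image, and confirm the old ReplicaSet is scaled down.
    $ kubectl create deployment demo --image=docker.io/library/nginx:1.14-alpine
    $ kubectl set image deployment/demo nginx=docker.io/library/nginx:1.15-alpine
    $ kubectl rollout status deployment/demo
    $ kubectl get rs -l app=demo       # old ReplicaSet at 0 replicas, new one owns the pods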
Sep 9 19:06:43.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:06:43.552: INFO: namespace: e2e-tests-deployment-hbddm, resource: bindings, ignored listing per whitelist Sep 9 19:06:43.606: INFO: namespace e2e-tests-deployment-hbddm deletion completed in 8.107468369s • [SLOW TEST:17.344 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:06:43.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 19:06:43.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-hrrnm" to be "success or failure" Sep 9 19:06:43.735: INFO: Pod "downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.991332ms Sep 9 19:06:45.739: INFO: Pod "downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008052212s Sep 9 19:06:47.743: INFO: Pod "downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012231519s STEP: Saw pod success Sep 9 19:06:47.743: INFO: Pod "downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:06:47.746: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 19:06:47.765: INFO: Waiting for pod downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007 to disappear Sep 9 19:06:47.770: INFO: Pod downwardapi-volume-9997e7d1-f2cf-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:06:47.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hrrnm" for this suite. 
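This variant projects the container's own CPU limit into a file via a downwardAPI volume rather than an environment variable; the suite then reads that file back from the client-container's logs. The actual pod is built in Go by the framework, but a hand-written sketch of the same shape (all names and the 500m limit are illustrative) would be:

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-limit-demo             # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m              # report the limit in millicores (prints 500)
    EOF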
Sep 9 19:06:53.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:06:53.827: INFO: namespace: e2e-tests-downward-api-hrrnm, resource: bindings, ignored listing per whitelist Sep 9 19:06:53.857: INFO: namespace e2e-tests-downward-api-hrrnm deletion completed in 6.083534272s • [SLOW TEST:10.250 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:06:53.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 9 19:07:01.015: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:07:02.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-jfq2w" for this suite. 
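Adoption and release in the test above are driven entirely by labels and ownerReferences: a bare pod whose labels match the ReplicaSet's selector gets an ownerReference added (adoption), and changing that label removes it again (release), after which the ReplicaSet creates a replacement pod. A small sketch of watching this by hand, using the pod name from the log and assuming its label is name=pod-adoption-release (the exact value is not printed in the output):

    # Who owns the pod right now?
    $ kubectl get pod pod-adoption-release \
        -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'

    # Change the matched label to release the pod from the ReplicaSet...
    $ kubectl label pod pod-adoption-release name=released --overwrite
    # ...and the ReplicaSet immediately creates a new pod to restore its replica count.
    $ kubectl get pods -l name=pod-adoption-release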
Sep 9 19:07:36.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:07:36.070: INFO: namespace: e2e-tests-replicaset-jfq2w, resource: bindings, ignored listing per whitelist Sep 9 19:07:36.131: INFO: namespace e2e-tests-replicaset-jfq2w deletion completed in 34.093097826s • [SLOW TEST:42.274 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:07:36.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0909 19:08:06.791724 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 9 19:08:06.791: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:08:06.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qmknq" for this suite. 
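Here the deployment is deleted with deleteOptions.PropagationPolicy set to Orphan, and the test then watches for 30 seconds to confirm the garbage collector leaves the ReplicaSet in place. With a kubectl of this vintage (v1.13), the closest manual equivalent is a non-cascading delete; the deployment name below is a placeholder since the log does not print it:

    # Delete only the Deployment object, orphaning its ReplicaSet and pods.
    $ kubectl delete deployment demo --cascade=false
    $ kubectl get rs -l app=demo       # the ReplicaSet survives, now without a living owner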
Sep 9 19:08:16.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:08:16.829: INFO: namespace: e2e-tests-gc-qmknq, resource: bindings, ignored listing per whitelist Sep 9 19:08:16.886: INFO: namespace e2e-tests-gc-qmknq deletion completed in 10.091060915s • [SLOW TEST:40.754 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:08:16.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 9 19:08:26.524: INFO: 9 pods remaining Sep 9 19:08:26.524: INFO: 0 pods has nil DeletionTimestamp Sep 9 19:08:26.524: INFO: Sep 9 19:08:27.936: INFO: 0 pods remaining Sep 9 19:08:27.936: INFO: 0 pods has nil DeletionTimestamp Sep 9 19:08:27.936: INFO: STEP: Gathering metrics W0909 19:08:31.274505 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 9 19:08:31.274: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:08:31.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tl26t" for this suite. 
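This case asks for Foreground propagation, so the replication controller must remain (with a deletionTimestamp and the foregroundDeletion finalizer) until every pod it owns is gone, which is what the "9 pods remaining ... 0 pods remaining" countdown above reflects. A hedged sketch of issuing the same kind of delete directly against the API through kubectl proxy, with placeholder namespace and rc names:

    $ kubectl proxy --port=8001 &
    $ curl -X DELETE "http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/demo-rc" \
        -H "Content-Type: application/json" \
        -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
    # While its pods terminate, the rc still exists with a deletionTimestamp and the
    # foregroundDeletion finalizer; it disappears only after the last pod is deleted.
    $ kubectl get rc demo-rc -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}'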
Sep 9 19:08:38.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:08:38.361: INFO: namespace: e2e-tests-gc-tl26t, resource: bindings, ignored listing per whitelist Sep 9 19:08:38.415: INFO: namespace e2e-tests-gc-tl26t deletion completed in 6.76873545s • [SLOW TEST:21.530 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:08:38.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Sep 9 19:08:38.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-fxf65' Sep 9 19:08:45.016: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Sep 9 19:08:45.016: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Sep 9 19:08:47.764: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rd9sp] Sep 9 19:08:47.764: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rd9sp" in namespace "e2e-tests-kubectl-fxf65" to be "running and ready" Sep 9 19:08:48.110: INFO: Pod "e2e-test-nginx-rc-rd9sp": Phase="Pending", Reason="", readiness=false. Elapsed: 346.063405ms Sep 9 19:08:50.137: INFO: Pod "e2e-test-nginx-rc-rd9sp": Phase="Running", Reason="", readiness=true. Elapsed: 2.37227553s Sep 9 19:08:50.137: INFO: Pod "e2e-test-nginx-rc-rd9sp" satisfied condition "running and ready" Sep 9 19:08:50.137: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-rd9sp] Sep 9 19:08:50.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fxf65' Sep 9 19:08:50.247: INFO: stderr: "" Sep 9 19:08:50.247: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Sep 9 19:08:50.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fxf65' Sep 9 19:08:50.386: INFO: stderr: "" Sep 9 19:08:50.386: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:08:50.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fxf65" for this suite. Sep 9 19:08:56.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:08:56.467: INFO: namespace: e2e-tests-kubectl-fxf65, resource: bindings, ignored listing per whitelist Sep 9 19:08:56.467: INFO: namespace e2e-tests-kubectl-fxf65 deletion completed in 6.069080408s • [SLOW TEST:18.051 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:08:56.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-e8c2de0b-f2cf-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 19:08:56.613: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-ctlm5" to be "success or failure" Sep 9 19:08:56.617: INFO: Pod "pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.772904ms Sep 9 19:08:58.685: INFO: Pod "pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071807144s Sep 9 19:09:00.688: INFO: Pod "pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074971752s
STEP: Saw pod success
Sep 9 19:09:00.688: INFO: Pod "pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 19:09:00.690: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007 container projected-secret-volume-test:
STEP: delete the pod
Sep 9 19:09:00.708: INFO: Waiting for pod pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007 to disappear
Sep 9 19:09:00.713: INFO: Pod pod-projected-secrets-e8c8f971-f2cf-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:09:00.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ctlm5" for this suite.
Sep 9 19:09:06.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:09:06.755: INFO: namespace: e2e-tests-projected-ctlm5, resource: bindings, ignored listing per whitelist
Sep 9 19:09:06.801: INFO: namespace e2e-tests-projected-ctlm5 deletion completed in 6.085970321s
• [SLOW TEST:10.334 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:09:06.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Sep 9 19:09:06.892: INFO: Waiting up to 5m0s for pod "downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-5mfdl" to be "success or failure"
Sep 9 19:09:06.907: INFO: Pod "downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.204149ms
Sep 9 19:09:08.911: INFO: Pod "downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019313016s
Sep 9 19:09:10.915: INFO: Pod "downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false.
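Note: the downward-api pod above passes once its container (dapi-container in the log) can see its own requests and limits as environment variables. A minimal sketch of that wiring via resourceFieldRef follows; the image, command, and variable names are illustrative assumptions, not values from this log:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-example
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29   # assumption
      command: ["sh", "-c", "env"]            # illustrative
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.memory
      - name: CPU_REQUEST
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: requests.cpu
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: requests.memory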
Elapsed: 4.023039568s
STEP: Saw pod success
Sep 9 19:09:10.915: INFO: Pod "downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 19:09:10.918: INFO: Trying to get logs from node hunter-worker2 pod downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007 container dapi-container:
STEP: delete the pod
Sep 9 19:09:10.949: INFO: Waiting for pod downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007 to disappear
Sep 9 19:09:10.961: INFO: Pod downward-api-eeec9ff4-f2cf-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:09:10.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5mfdl" for this suite.
Sep 9 19:09:16.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:09:17.005: INFO: namespace: e2e-tests-downward-api-5mfdl, resource: bindings, ignored listing per whitelist
Sep 9 19:09:17.059: INFO: namespace e2e-tests-downward-api-5mfdl deletion completed in 6.09468731s
• [SLOW TEST:10.258 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:09:17.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep 9 19:09:17.141: INFO: Creating deployment "nginx-deployment"
Sep 9 19:09:17.159: INFO: Waiting for observed generation 1
Sep 9 19:09:19.230: INFO: Waiting for all required pods to come up
Sep 9 19:09:19.234: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 9 19:09:29.305: INFO: Waiting for deployment "nginx-deployment" to complete
Sep 9 19:09:29.320: INFO: Updating deployment "nginx-deployment" with a non-existent image
Sep 9 19:09:29.325: INFO: Updating deployment nginx-deployment
Sep 9 19:09:29.325: INFO: Waiting for observed generation 2
Sep 9 19:09:31.941: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 9 19:09:32.290: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 9 19:09:32.479: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 9 19:09:32.487: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 9 19:09:32.488: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
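Note: the "nginx-deployment" exercised above starts at 10 replicas of docker.io/library/nginx:1.14-alpine, is updated mid-rollout to a non-existent image (nginx:404 in the dumps that follow), and is then scaled from 10 to 30 so the test can check that the extra replicas are split proportionally between the old and new ReplicaSets. Reconstructed from the spec dumped below, a sketch of the Deployment as created would be roughly:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      name: nginx
  spec:
    replicas: 10            # later scaled to 30 by the test
    selector:
      matchLabels:
        name: nginx
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 2
        maxSurge: 3
    template:
      metadata:
        labels:
          name: nginx
      spec:
        terminationGracePeriodSeconds: 0
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine

The later verification lines (.spec.replicas = 20 for the first rollout's ReplicaSet and 13 for the second) are consistent with proportional scaling: with maxSurge 3 the total may reach 33, and the old and new ReplicaSets, sitting at 8 and 5 replicas when the scale-up lands, receive the additional replicas roughly in that 8:5 ratio, giving 20 and 13.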
Sep 9 19:09:32.490: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Sep 9 19:09:32.493: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Sep 9 19:09:32.493: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Sep 9 19:09:32.502: INFO: Updating deployment nginx-deployment Sep 9 19:09:32.502: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Sep 9 19:09:32.578: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Sep 9 19:09:34.757: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Sep 9 19:09:34.763: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-fslnm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fslnm/deployments/nginx-deployment,UID:f509ae8c-f2cf-11ea-b060-0242ac120006,ResourceVersion:742318,Generation:3,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-09-09 19:09:32 +0000 UTC 2020-09-09 19:09:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have 
minimum availability.} {Progressing True 2020-09-09 19:09:33 +0000 UTC 2020-09-09 19:09:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Sep 9 19:09:34.766: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-fslnm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fslnm/replicasets/nginx-deployment-5c98f8fb5,UID:fc4cf5c1-f2cf-11ea-b060-0242ac120006,ResourceVersion:742309,Generation:3,CreationTimestamp:2020-09-09 19:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f509ae8c-f2cf-11ea-b060-0242ac120006 0xc002aa2dd7 0xc002aa2dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 9 19:09:34.766: INFO: All old ReplicaSets of Deployment "nginx-deployment": Sep 9 19:09:34.766: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-fslnm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fslnm/replicasets/nginx-deployment-85ddf47c5d,UID:f50d6c1a-f2cf-11ea-b060-0242ac120006,ResourceVersion:742306,Generation:3,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f509ae8c-f2cf-11ea-b060-0242ac120006 0xc002aa2e97 0xc002aa2e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Sep 9 19:09:34.772: INFO: Pod "nginx-deployment-5c98f8fb5-2fmc4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2fmc4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-2fmc4,UID:fc4d8c0b-f2cf-11ea-b060-0242ac120006,ResourceVersion:742204,Generation:0,CreationTimestamp:2020-09-09 19:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026ce6d7 0xc0026ce6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ce750} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ce770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.772: INFO: Pod "nginx-deployment-5c98f8fb5-44jdt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-44jdt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-44jdt,UID:fe487f22-f2cf-11ea-b060-0242ac120006,ResourceVersion:742288,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026ce830 0xc0026ce831}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ce8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ce8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.772: INFO: Pod "nginx-deployment-5c98f8fb5-52jqm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-52jqm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-52jqm,UID:fe43b7ef-f2cf-11ea-b060-0242ac120006,ResourceVersion:742316,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026ce940 0xc0026ce941}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ce9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ce9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.772: INFO: Pod "nginx-deployment-5c98f8fb5-6ldv5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6ldv5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-6ldv5,UID:fe43cb92-f2cf-11ea-b060-0242ac120006,ResourceVersion:742277,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026ceaa0 0xc0026ceaa1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ceb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ceb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-b6dbf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b6dbf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-b6dbf,UID:fc69baa7-f2cf-11ea-b060-0242ac120006,ResourceVersion:742227,Generation:0,CreationTimestamp:2020-09-09 19:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cebc0 0xc0026cebc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cec40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cec60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-bdhnp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bdhnp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-bdhnp,UID:fe488511-f2cf-11ea-b060-0242ac120006,ResourceVersion:742291,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026ced20 0xc0026ced21}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ceda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cedc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-f6wwn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f6wwn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-f6wwn,UID:fe486fbe-f2cf-11ea-b060-0242ac120006,ResourceVersion:742289,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cee30 0xc0026cee31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026ceeb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026ceed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-gl74z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gl74z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-gl74z,UID:fe4888f9-f2cf-11ea-b060-0242ac120006,ResourceVersion:742290,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cef40 0xc0026cef41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cefc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cefe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-kqftl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kqftl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-kqftl,UID:fe4f3654-f2cf-11ea-b060-0242ac120006,ResourceVersion:742295,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cf050 0xc0026cf051}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf0d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-n78d6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n78d6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-n78d6,UID:fc4e8bd7-f2cf-11ea-b060-0242ac120006,ResourceVersion:742203,Generation:0,CreationTimestamp:2020-09-09 19:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cf160 0xc0026cf161}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-nnxkk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nnxkk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-nnxkk,UID:fc83935f-f2cf-11ea-b060-0242ac120006,ResourceVersion:742232,Generation:0,CreationTimestamp:2020-09-09 19:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cf2c0 0xc0026cf2c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf340} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.773: INFO: Pod "nginx-deployment-5c98f8fb5-qwcv8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qwcv8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-qwcv8,UID:fe4199d2-f2cf-11ea-b060-0242ac120006,ResourceVersion:742308,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cf420 0xc0026cf421}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-5c98f8fb5-s29qn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s29qn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-5c98f8fb5-s29qn,UID:fc4ebaee-f2cf-11ea-b060-0242ac120006,ResourceVersion:742221,Generation:0,CreationTimestamp:2020-09-09 19:09:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fc4cf5c1-f2cf-11ea-b060-0242ac120006 0xc0026cf580 0xc0026cf581}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf600} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-5k2pg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5k2pg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-5k2pg,UID:f51d7f98-f2cf-11ea-b060-0242ac120006,ResourceVersion:742132,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cf6e0 0xc0026cf6e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf750} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.82,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cfe9ecc2dae0a788c1bf3952948cd22c3c2f2191d89ab43fe00fd91cb5d72ec8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-7rtvb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7rtvb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-7rtvb,UID:fe4856ff-f2cf-11ea-b060-0242ac120006,ResourceVersion:742287,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cf830 0xc0026cf831}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-8hs8f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8hs8f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-8hs8f,UID:fe4843e9-f2cf-11ea-b060-0242ac120006,ResourceVersion:742282,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cf930 0xc0026cf931}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cf9a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cf9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-8rjx6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8rjx6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-8rjx6,UID:f5204e49-f2cf-11ea-b060-0242ac120006,ResourceVersion:742139,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cfa30 0xc0026cfa31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cfaa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cfac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.55,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://36b0f33d1c04e22b1c044c8fc37ed2808414493e632c745f8fa9150ad0ba3024}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-bzkxq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bzkxq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-bzkxq,UID:fe4857ef-f2cf-11ea-b060-0242ac120006,ResourceVersion:742284,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cfb80 0xc0026cfb81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cfbf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cfc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-ccldn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ccldn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-ccldn,UID:fe418b62-f2cf-11ea-b060-0242ac120006,ResourceVersion:742303,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cfc80 0xc0026cfc81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cfcf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cfd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-cl2mp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cl2mp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-cl2mp,UID:fe43cd91-f2cf-11ea-b060-0242ac120006,ResourceVersion:742278,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cfdc0 0xc0026cfdc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cfe30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cfe50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.774: INFO: Pod "nginx-deployment-85ddf47c5d-fmdzr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fmdzr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-fmdzr,UID:fe48496f-f2cf-11ea-b060-0242ac120006,ResourceVersion:742286,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cfec0 0xc0026cfec1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026cff30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026cff50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-gvjqs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gvjqs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-gvjqs,UID:f51d8271-f2cf-11ea-b060-0242ac120006,ResourceVersion:742146,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc0026cffc0 0xc0026cffc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213a030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213a050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.81,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2f3ad888f0fc6b63e3b389ae6bb96b2707681c18fe56d9c02e8fd4d2402a0680}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-h5l4m" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h5l4m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-h5l4m,UID:fe43bc68-f2cf-11ea-b060-0242ac120006,ResourceVersion:742341,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213a190 0xc00213a191}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213a200} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213a220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-hh9sf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hh9sf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-hh9sf,UID:f52704be-f2cf-11ea-b060-0242ac120006,ResourceVersion:742162,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213a340 0xc00213a341}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213a3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213a3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.58,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://341f6943446ccab2c1745a5333941cf9f7f6c59394f407448a89832aa7164e91}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-j77gn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j77gn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-j77gn,UID:f526db5e-f2cf-11ea-b060-0242ac120006,ResourceVersion:742159,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213a520 0xc00213a521}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil 
nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213a590} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213a5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.57,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://87f063de960b22173ee6c3053d31d1856dc7c861aa1bd97e7908aaf312a9f9c7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-jhkcq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jhkcq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-jhkcq,UID:fe419679-f2cf-11ea-b060-0242ac120006,ResourceVersion:742300,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213a670 0xc00213a671}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213a700} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213a7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-lbvx9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lbvx9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-lbvx9,UID:fe483507-f2cf-11ea-b060-0242ac120006,ResourceVersion:742354,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213a870 0xc00213a871}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213a8e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213a900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.775: INFO: Pod "nginx-deployment-85ddf47c5d-lwxbw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lwxbw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-lwxbw,UID:f52055b1-f2cf-11ea-b060-0242ac120006,ResourceVersion:742156,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213a9b0 0xc00213a9b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213b270} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213b290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.56,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0320c88def6d8c516f882c63e685b7ed82513de53a9e9e6fc10121d059367bab}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.776: INFO: Pod "nginx-deployment-85ddf47c5d-pwpj9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pwpj9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-pwpj9,UID:f51cfac6-f2cf-11ea-b060-0242ac120006,ResourceVersion:742116,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213b360 0xc00213b361}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213b3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213b8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.54,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6350c67090e94d8f35e8a1083bb5cb2c69a2e51ede8f258468ae5a8897a97beb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.776: INFO: Pod "nginx-deployment-85ddf47c5d-t9phj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t9phj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-t9phj,UID:fe43d331-f2cf-11ea-b060-0242ac120006,ResourceVersion:742348,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc00213bae0 0xc00213bae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00213bb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00213bba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.776: INFO: Pod "nginx-deployment-85ddf47c5d-wm424" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wm424,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-wm424,UID:f5205549-f2cf-11ea-b060-0242ac120006,ResourceVersion:742168,Generation:0,CreationTimestamp:2020-09-09 19:09:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc001ed6040 0xc001ed6041}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ed6130} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ed6160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.84,StartTime:2020-09-09 19:09:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:09:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://46e864d9fb555c888d07ffe4b4c0aad24e81943b54c8df60328ab5ead76f24ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.776: INFO: Pod "nginx-deployment-85ddf47c5d-wqqqw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wqqqw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-wqqqw,UID:fe38d015-f2cf-11ea-b060-0242ac120006,ResourceVersion:742280,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc001ed6300 0xc001ed6301}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ed6370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ed6390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 9 19:09:34.776: INFO: Pod "nginx-deployment-85ddf47c5d-xdb4g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xdb4g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-fslnm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fslnm/pods/nginx-deployment-85ddf47c5d-xdb4g,UID:fe43bb2d-f2cf-11ea-b060-0242ac120006,ResourceVersion:742313,Generation:0,CreationTimestamp:2020-09-09 19:09:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f50d6c1a-f2cf-11ea-b060-0242ac120006 0xc001ed6520 0xc001ed6521}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-r9msj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-r9msj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-r9msj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ed6590} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ed65b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:09:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:09:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:09:34.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-fslnm" for this suite. 
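For reference, the Deployment this proportional-scaling spec exercises can be reconstructed from the pod dumps above: a ReplicaSet nginx-deployment-85ddf47c5d owned by Deployment nginx-deployment, selecting pods labelled name: nginx and running docker.io/library/nginx:1.14-alpine. Below is a minimal Go sketch of an equivalent object using the stable k8s.io/api types; the namespace and replica count are illustrative placeholders, not values taken from this run, and this is not the test's own source.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxDeployment builds a Deployment shaped like the one seen in the log:
// one nginx:1.14-alpine container selected by the label name=nginx. The
// pod-template-hash label in the dumps is added by the controller, not by
// the manifest; namespace and replicas here are placeholders.
func nginxDeployment(namespace string, replicas int32) *appsv1.Deployment {
	labels := map[string]string{"name": "nginx"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() {
	d := nginxDeployment("e2e-tests-deployment-example", 13)
	fmt.Printf("%s/%s: %d replicas of %s\n",
		d.Namespace, d.Name, *d.Spec.Replicas, d.Spec.Template.Spec.Containers[0].Image)
}

The nginx:404 image appearing in one of the dumps above is consistent with the test pausing a rollout on an unresolvable image before scaling, so that both the old and new ReplicaSets must be resized proportionally.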
Sep 9 19:09:52.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:09:52.901: INFO: namespace: e2e-tests-deployment-fslnm, resource: bindings, ignored listing per whitelist Sep 9 19:09:52.908: INFO: namespace e2e-tests-deployment-fslnm deletion completed in 18.128270139s • [SLOW TEST:35.848 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:09:52.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 19:09:53.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-zlw9c" to be "success or failure" Sep 9 19:09:53.518: INFO: Pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 187.384511ms Sep 9 19:09:55.522: INFO: Pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19107459s Sep 9 19:09:57.528: INFO: Pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197014294s Sep 9 19:09:59.616: INFO: Pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 6.28575656s Sep 9 19:10:01.620: INFO: Pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.289754389s STEP: Saw pod success Sep 9 19:10:01.621: INFO: Pod "downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:10:01.623: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 19:10:01.708: INFO: Waiting for pod downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007 to disappear Sep 9 19:10:01.717: INFO: Pod downwardapi-volume-0a96fdf9-f2d0-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:10:01.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zlw9c" for this suite. Sep 9 19:10:07.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:10:07.811: INFO: namespace: e2e-tests-downward-api-zlw9c, resource: bindings, ignored listing per whitelist Sep 9 19:10:07.852: INFO: namespace e2e-tests-downward-api-zlw9c deletion completed in 6.091559946s • [SLOW TEST:14.944 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:10:07.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 19:10:07.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-kjhfn" to be "success or failure" Sep 9 19:10:07.975: INFO: Pod "downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.889903ms Sep 9 19:10:10.030: INFO: Pod "downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071489365s Sep 9 19:10:12.034: INFO: Pod "downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075898961s STEP: Saw pod success Sep 9 19:10:12.034: INFO: Pod "downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:10:12.037: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 19:10:12.124: INFO: Waiting for pod downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007 to disappear Sep 9 19:10:12.239: INFO: Pod downwardapi-volume-1352ab4e-f2d0-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:10:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kjhfn" for this suite. Sep 9 19:10:18.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:10:18.355: INFO: namespace: e2e-tests-downward-api-kjhfn, resource: bindings, ignored listing per whitelist Sep 9 19:10:18.420: INFO: namespace e2e-tests-downward-api-kjhfn deletion completed in 6.178038361s • [SLOW TEST:10.568 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:10:18.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-19a415c1-f2d0-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume secrets Sep 9 19:10:18.568: INFO: Waiting up to 5m0s for pod "pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-nbddq" to be "success or failure" Sep 9 19:10:18.572: INFO: Pod "pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.9681ms Sep 9 19:10:20.916: INFO: Pod "pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348432962s Sep 9 19:10:22.923: INFO: Pod "pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.355083172s STEP: Saw pod success Sep 9 19:10:22.923: INFO: Pod "pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:10:22.938: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007 container secret-volume-test: STEP: delete the pod Sep 9 19:10:22.982: INFO: Waiting for pod pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007 to disappear Sep 9 19:10:23.020: INFO: Pod pod-secrets-19a547f4-f2d0-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:10:23.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nbddq" for this suite. Sep 9 19:10:29.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:10:29.087: INFO: namespace: e2e-tests-secrets-nbddq, resource: bindings, ignored listing per whitelist Sep 9 19:10:29.120: INFO: namespace e2e-tests-secrets-nbddq deletion completed in 6.097136218s • [SLOW TEST:10.700 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:10:29.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-2002b53f-f2d0-11ea-88c2-0242ac110007 STEP: Creating a pod to test consume configMaps Sep 9 19:10:29.258: INFO: Waiting up to 5m0s for pod "pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-br2wf" to be "success or failure" Sep 9 19:10:29.273: INFO: Pod "pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.141919ms Sep 9 19:10:31.277: INFO: Pod "pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019061984s Sep 9 19:10:33.325: INFO: Pod "pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067259492s STEP: Saw pod success Sep 9 19:10:33.325: INFO: Pod "pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:10:33.328: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007 container configmap-volume-test: STEP: delete the pod Sep 9 19:10:33.383: INFO: Waiting for pod pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007 to disappear Sep 9 19:10:33.392: INFO: Pod pod-configmaps-20048e84-f2d0-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:10:33.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-br2wf" for this suite. Sep 9 19:10:39.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:10:39.441: INFO: namespace: e2e-tests-configmap-br2wf, resource: bindings, ignored listing per whitelist Sep 9 19:10:39.479: INFO: namespace e2e-tests-configmap-br2wf deletion completed in 6.083035568s • [SLOW TEST:10.358 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:10:39.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Sep 9 19:10:39.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-rfvws" to be "success or failure" Sep 9 19:10:39.617: INFO: Pod "downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 48.864173ms Sep 9 19:10:41.677: INFO: Pod "downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108708054s Sep 9 19:10:43.681: INFO: Pod "downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.113194322s STEP: Saw pod success Sep 9 19:10:43.681: INFO: Pod "downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:10:43.684: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007 container client-container: STEP: delete the pod Sep 9 19:10:43.714: INFO: Waiting for pod downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007 to disappear Sep 9 19:10:43.734: INFO: Pod downwardapi-volume-2629d2f7-f2d0-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:10:43.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rfvws" for this suite. Sep 9 19:10:49.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:10:49.770: INFO: namespace: e2e-tests-downward-api-rfvws, resource: bindings, ignored listing per whitelist Sep 9 19:10:49.825: INFO: namespace e2e-tests-downward-api-rfvws deletion completed in 6.086412211s • [SLOW TEST:10.346 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:10:49.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Sep 9 19:10:50.185: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jqftg,SelfLink:/api/v1/namespaces/e2e-tests-watch-jqftg/configmaps/e2e-watch-test-watch-closed,UID:2c724404-f2d0-11ea-b060-0242ac120006,ResourceVersion:742864,Generation:0,CreationTimestamp:2020-09-09 19:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Sep 9 19:10:50.186: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jqftg,SelfLink:/api/v1/namespaces/e2e-tests-watch-jqftg/configmaps/e2e-watch-test-watch-closed,UID:2c724404-f2d0-11ea-b060-0242ac120006,ResourceVersion:742865,Generation:0,CreationTimestamp:2020-09-09 19:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 9 19:10:50.205: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jqftg,SelfLink:/api/v1/namespaces/e2e-tests-watch-jqftg/configmaps/e2e-watch-test-watch-closed,UID:2c724404-f2d0-11ea-b060-0242ac120006,ResourceVersion:742866,Generation:0,CreationTimestamp:2020-09-09 19:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Sep 9 19:10:50.205: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jqftg,SelfLink:/api/v1/namespaces/e2e-tests-watch-jqftg/configmaps/e2e-watch-test-watch-closed,UID:2c724404-f2d0-11ea-b060-0242ac120006,ResourceVersion:742867,Generation:0,CreationTimestamp:2020-09-09 19:10:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:10:50.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jqftg" for this suite. 
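The object being watched here is nothing more than a labelled ConfigMap whose data.mutation value is bumped between watches; the second watch is opened from the resourceVersion recorded when the first watch closed, so the missed MODIFIED and DELETED events are replayed. A sketch of that ConfigMap, with the name and label taken from the dump above and the rest illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "1"    # incremented on each modification while the first watch is closed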
Sep 9 19:10:56.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:10:56.326: INFO: namespace: e2e-tests-watch-jqftg, resource: bindings, ignored listing per whitelist Sep 9 19:10:56.326: INFO: namespace e2e-tests-watch-jqftg deletion completed in 6.08535232s • [SLOW TEST:6.500 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:10:56.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xn22f STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 9 19:10:56.458: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Sep 9 19:11:22.600: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.72 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xn22f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 19:11:22.600: INFO: >>> kubeConfig: /root/.kube/config I0909 19:11:22.636562 6 log.go:172] (0xc00088d810) (0xc00224d040) Create stream I0909 19:11:22.636602 6 log.go:172] (0xc00088d810) (0xc00224d040) Stream added, broadcasting: 1 I0909 19:11:22.639820 6 log.go:172] (0xc00088d810) Reply frame received for 1 I0909 19:11:22.639874 6 log.go:172] (0xc00088d810) (0xc00224d0e0) Create stream I0909 19:11:22.639896 6 log.go:172] (0xc00088d810) (0xc00224d0e0) Stream added, broadcasting: 3 I0909 19:11:22.640988 6 log.go:172] (0xc00088d810) Reply frame received for 3 I0909 19:11:22.641027 6 log.go:172] (0xc00088d810) (0xc00224d220) Create stream I0909 19:11:22.641041 6 log.go:172] (0xc00088d810) (0xc00224d220) Stream added, broadcasting: 5 I0909 19:11:22.641981 6 log.go:172] (0xc00088d810) Reply frame received for 5 I0909 19:11:23.717333 6 log.go:172] (0xc00088d810) Data frame received for 3 I0909 19:11:23.717389 6 log.go:172] (0xc00224d0e0) (3) Data frame handling I0909 19:11:23.717420 6 log.go:172] (0xc00224d0e0) (3) Data frame sent I0909 19:11:23.717456 6 log.go:172] (0xc00088d810) Data frame received for 3 I0909 19:11:23.717471 6 log.go:172] (0xc00224d0e0) (3) Data frame handling I0909 19:11:23.717564 6 log.go:172] (0xc00088d810) Data frame received for 5 I0909 19:11:23.717607 6 log.go:172] (0xc00224d220) (5) Data frame handling I0909 19:11:23.719701 6 log.go:172] (0xc00088d810) Data frame received for 1 I0909 19:11:23.719745 6 
log.go:172] (0xc00224d040) (1) Data frame handling I0909 19:11:23.719779 6 log.go:172] (0xc00224d040) (1) Data frame sent I0909 19:11:23.719821 6 log.go:172] (0xc00088d810) (0xc00224d040) Stream removed, broadcasting: 1 I0909 19:11:23.719856 6 log.go:172] (0xc00088d810) Go away received I0909 19:11:23.719953 6 log.go:172] (0xc00088d810) (0xc00224d040) Stream removed, broadcasting: 1 I0909 19:11:23.719976 6 log.go:172] (0xc00088d810) (0xc00224d0e0) Stream removed, broadcasting: 3 I0909 19:11:23.719987 6 log.go:172] (0xc00088d810) (0xc00224d220) Stream removed, broadcasting: 5 Sep 9 19:11:23.720: INFO: Found all expected endpoints: [netserver-0] Sep 9 19:11:23.723: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.103 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xn22f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 9 19:11:23.723: INFO: >>> kubeConfig: /root/.kube/config I0909 19:11:23.758348 6 log.go:172] (0xc000fd0580) (0xc00218af00) Create stream I0909 19:11:23.758392 6 log.go:172] (0xc000fd0580) (0xc00218af00) Stream added, broadcasting: 1 I0909 19:11:23.765781 6 log.go:172] (0xc000fd0580) Reply frame received for 1 I0909 19:11:23.765862 6 log.go:172] (0xc000fd0580) (0xc002168460) Create stream I0909 19:11:23.765894 6 log.go:172] (0xc000fd0580) (0xc002168460) Stream added, broadcasting: 3 I0909 19:11:23.767096 6 log.go:172] (0xc000fd0580) Reply frame received for 3 I0909 19:11:23.767134 6 log.go:172] (0xc000fd0580) (0xc00224d2c0) Create stream I0909 19:11:23.767146 6 log.go:172] (0xc000fd0580) (0xc00224d2c0) Stream added, broadcasting: 5 I0909 19:11:23.768211 6 log.go:172] (0xc000fd0580) Reply frame received for 5 I0909 19:11:24.829593 6 log.go:172] (0xc000fd0580) Data frame received for 3 I0909 19:11:24.829641 6 log.go:172] (0xc002168460) (3) Data frame handling I0909 19:11:24.829685 6 log.go:172] (0xc002168460) (3) Data frame sent I0909 19:11:24.829726 6 log.go:172] (0xc000fd0580) Data frame received for 3 I0909 19:11:24.829766 6 log.go:172] (0xc002168460) (3) Data frame handling I0909 19:11:24.830048 6 log.go:172] (0xc000fd0580) Data frame received for 5 I0909 19:11:24.830083 6 log.go:172] (0xc00224d2c0) (5) Data frame handling I0909 19:11:24.832106 6 log.go:172] (0xc000fd0580) Data frame received for 1 I0909 19:11:24.832143 6 log.go:172] (0xc00218af00) (1) Data frame handling I0909 19:11:24.832168 6 log.go:172] (0xc00218af00) (1) Data frame sent I0909 19:11:24.832191 6 log.go:172] (0xc000fd0580) (0xc00218af00) Stream removed, broadcasting: 1 I0909 19:11:24.832208 6 log.go:172] (0xc000fd0580) Go away received I0909 19:11:24.832366 6 log.go:172] (0xc000fd0580) (0xc00218af00) Stream removed, broadcasting: 1 I0909 19:11:24.832406 6 log.go:172] (0xc000fd0580) (0xc002168460) Stream removed, broadcasting: 3 I0909 19:11:24.832432 6 log.go:172] (0xc000fd0580) (0xc00224d2c0) Stream removed, broadcasting: 5 Sep 9 19:11:24.832: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:11:24.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xn22f" for this suite. 
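The exec calls above run inside a helper pod on the host network, probing each netserver pod's UDP port 8081 by pod IP, which is what makes this a node-to-pod connectivity check. A rough sketch of such a helper pod, assuming only that it needs hostNetwork and a shell with nc available (the image and command are illustrative; the suite uses its own hostexec image):

apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true                  # probe originates from the node's network namespace
  containers:
  - name: hostexec
    image: busybox                   # illustrative; anything providing /bin/sh and nc works
    command: ["sleep", "3600"]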
Sep 9 19:11:46.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:11:46.920: INFO: namespace: e2e-tests-pod-network-test-xn22f, resource: bindings, ignored listing per whitelist Sep 9 19:11:46.977: INFO: namespace e2e-tests-pod-network-test-xn22f deletion completed in 22.140361956s • [SLOW TEST:50.651 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:11:46.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Sep 9 19:11:47.087: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Sep 9 19:11:47.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:47.373: INFO: stderr: "" Sep 9 19:11:47.373: INFO: stdout: "service/redis-slave created\n" Sep 9 19:11:47.374: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Sep 9 19:11:47.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:47.691: INFO: stderr: "" Sep 9 19:11:47.691: INFO: stdout: "service/redis-master created\n" Sep 9 19:11:47.691: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 9 19:11:47.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:47.974: INFO: stderr: "" Sep 9 19:11:47.974: INFO: stdout: "service/frontend created\n" Sep 9 19:11:47.975: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Sep 9 19:11:47.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:48.240: INFO: stderr: "" Sep 9 19:11:48.240: INFO: stdout: "deployment.extensions/frontend created\n" Sep 9 19:11:48.240: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 9 19:11:48.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:48.523: INFO: stderr: "" Sep 9 19:11:48.524: INFO: stdout: "deployment.extensions/redis-master created\n" Sep 9 19:11:48.524: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Sep 9 19:11:48.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:48.865: INFO: stderr: "" Sep 9 19:11:48.865: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Sep 9 19:11:48.865: INFO: Waiting for all frontend pods to be Running. Sep 9 19:11:58.916: INFO: Waiting for frontend to serve content. Sep 9 19:11:58.935: INFO: Trying to add a new entry to the guestbook. Sep 9 19:11:58.990: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 9 19:11:59.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:59.213: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 9 19:11:59.213: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Sep 9 19:11:59.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:59.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 9 19:11:59.413: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Sep 9 19:11:59.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:59.573: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 9 19:11:59.573: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 9 19:11:59.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:59.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 9 19:11:59.679: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 9 19:11:59.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:11:59.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 9 19:11:59.796: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Sep 9 19:11:59.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjt6p' Sep 9 19:12:00.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 9 19:12:00.137: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:12:00.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jjt6p" for this suite. 
Sep 9 19:12:42.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:12:42.637: INFO: namespace: e2e-tests-kubectl-jjt6p, resource: bindings, ignored listing per whitelist Sep 9 19:12:42.639: INFO: namespace e2e-tests-kubectl-jjt6p deletion completed in 42.281700437s • [SLOW TEST:55.661 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:12:42.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 9 19:12:42.778: INFO: Waiting up to 5m0s for pod "pod-6f9a393c-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-xkv7k" to be "success or failure" Sep 9 19:12:42.796: INFO: Pod "pod-6f9a393c-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.611388ms Sep 9 19:12:44.800: INFO: Pod "pod-6f9a393c-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021749155s Sep 9 19:12:46.804: INFO: Pod "pod-6f9a393c-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025634515s STEP: Saw pod success Sep 9 19:12:46.804: INFO: Pod "pod-6f9a393c-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure" Sep 9 19:12:46.807: INFO: Trying to get logs from node hunter-worker pod pod-6f9a393c-f2d0-11ea-88c2-0242ac110007 container test-container: STEP: delete the pod Sep 9 19:12:47.015: INFO: Waiting for pod pod-6f9a393c-f2d0-11ea-88c2-0242ac110007 to disappear Sep 9 19:12:47.040: INFO: Pod pod-6f9a393c-f2d0-11ea-88c2-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:12:47.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xkv7k" for this suite. 
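A rough equivalent of the pod this case creates: an emptyDir backed by tmpfs (medium: Memory) mounted into a container that runs as a non-root UID and leaves behind a file with 0777 permissions for the test to verify. The name, UID and command below are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root UID; the suite picks its own value
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir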
Sep 9 19:12:53.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:12:53.117: INFO: namespace: e2e-tests-emptydir-xkv7k, resource: bindings, ignored listing per whitelist Sep 9 19:12:53.150: INFO: namespace e2e-tests-emptydir-xkv7k deletion completed in 6.106274341s • [SLOW TEST:10.510 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:12:53.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Sep 9 19:12:57.893: INFO: Successfully updated pod "labelsupdate75e28101-f2d0-11ea-88c2-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:13:01.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2pzjf" for this suite. 
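What this case relies on is that a downwardAPI volume projecting metadata.labels is refreshed by the kubelet after the pod's labels are patched, so the file content changes without restarting the container. A minimal sketch with illustrative names and labels:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    key1: value1                       # patched to a new value during the test
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # file is updated when the pod's labels change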
Sep 9 19:13:23.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:13:24.019: INFO: namespace: e2e-tests-downward-api-2pzjf, resource: bindings, ignored listing per whitelist Sep 9 19:13:24.063: INFO: namespace e2e-tests-downward-api-2pzjf deletion completed in 22.090698134s • [SLOW TEST:30.913 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:13:24.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Sep 9 19:13:24.162: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:13:31.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-6fwbq" for this suite. 
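The invariant checked here is ordering: with restartPolicy: Never, each init container must run to completion, in sequence, before the app container is started, and the Initialized condition flips only after the last one succeeds. A sketch of such a pod, with images and commands chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]          # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["true"]          # must exit 0 before the app container starts
  containers:
  - name: run1
    image: busybox
    command: ["true"]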
Sep 9 19:13:37.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:13:37.677: INFO: namespace: e2e-tests-init-container-6fwbq, resource: bindings, ignored listing per whitelist Sep 9 19:13:37.682: INFO: namespace e2e-tests-init-container-6fwbq deletion completed in 6.111055626s • [SLOW TEST:13.619 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:13:37.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:13:37.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r6xk6" for this suite. 
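QoS class is derived from the pod's resource spec: equal, non-empty requests and limits for every container yield Guaranteed. A sketch of a pod that would end up with status.qosClass: Guaranteed, with values chosen for illustration rather than copied from the spec this case submits:

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m              # limits equal to requests for every resource => Guaranteed
        memory: 100Mi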
Sep 9 19:13:59.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:13:59.920: INFO: namespace: e2e-tests-pods-r6xk6, resource: bindings, ignored listing per whitelist Sep 9 19:13:59.934: INFO: namespace e2e-tests-pods-r6xk6 deletion completed in 22.10753422s • [SLOW TEST:22.252 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:13:59.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-dk8j STEP: Creating a pod to test atomic-volume-subpath Sep 9 19:14:00.070: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dk8j" in namespace "e2e-tests-subpath-m8lsx" to be "success or failure" Sep 9 19:14:00.105: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 35.2377ms Sep 9 19:14:02.110: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039485411s Sep 9 19:14:04.113: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04288406s Sep 9 19:14:06.117: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=true. Elapsed: 6.047199604s Sep 9 19:14:08.122: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 8.051575511s Sep 9 19:14:10.126: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 10.055899225s Sep 9 19:14:12.129: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 12.059333079s Sep 9 19:14:14.133: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 14.063171066s Sep 9 19:14:16.137: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 16.067010623s Sep 9 19:14:18.142: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 18.071615697s Sep 9 19:14:20.146: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 20.075565252s Sep 9 19:14:22.150: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.079713445s Sep 9 19:14:24.154: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Running", Reason="", readiness=false. Elapsed: 24.08420287s Sep 9 19:14:26.158: INFO: Pod "pod-subpath-test-configmap-dk8j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.087909528s STEP: Saw pod success Sep 9 19:14:26.158: INFO: Pod "pod-subpath-test-configmap-dk8j" satisfied condition "success or failure" Sep 9 19:14:26.160: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-dk8j container test-container-subpath-configmap-dk8j: STEP: delete the pod Sep 9 19:14:26.191: INFO: Waiting for pod pod-subpath-test-configmap-dk8j to disappear Sep 9 19:14:26.206: INFO: Pod pod-subpath-test-configmap-dk8j no longer exists STEP: Deleting pod pod-subpath-test-configmap-dk8j Sep 9 19:14:26.206: INFO: Deleting pod "pod-subpath-test-configmap-dk8j" in namespace "e2e-tests-subpath-m8lsx" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:14:26.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-m8lsx" for this suite. Sep 9 19:14:32.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:14:32.323: INFO: namespace: e2e-tests-subpath-m8lsx, resource: bindings, ignored listing per whitelist Sep 9 19:14:32.347: INFO: namespace e2e-tests-subpath-m8lsx deletion completed in 6.135334476s • [SLOW TEST:32.413 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:14:32.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Sep 9 19:14:32.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Sep 9 19:14:32.602: INFO: stderr: "" Sep 9 19:14:32.602: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45441\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45441/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:14:32.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p7sdn" for this suite. Sep 9 19:14:38.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:14:38.692: INFO: namespace: e2e-tests-kubectl-p7sdn, resource: bindings, ignored listing per whitelist Sep 9 19:14:38.718: INFO: namespace e2e-tests-kubectl-p7sdn deletion completed in 6.112087643s • [SLOW TEST:6.371 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:14:38.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Sep 9 19:14:38.807: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 9 19:14:38.815: INFO: Waiting for terminating namespaces to be deleted... Sep 9 19:14:38.817: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Sep 9 19:14:38.822: INFO: kube-proxy-t9g4m from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded) Sep 9 19:14:38.822: INFO: Container kube-proxy ready: true, restart count 0 Sep 9 19:14:38.822: INFO: kindnet-4qkqp from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded) Sep 9 19:14:38.822: INFO: Container kindnet-cni ready: true, restart count 0 Sep 9 19:14:38.822: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Sep 9 19:14:38.827: INFO: kube-proxy-vl5mq from kube-system started at 2020-09-05 13:37:20 +0000 UTC (1 container statuses recorded) Sep 9 19:14:38.827: INFO: Container kube-proxy ready: true, restart count 0 Sep 9 19:14:38.827: INFO: kindnet-z7tw7 from kube-system started at 2020-09-05 13:37:22 +0000 UTC (1 container statuses recorded) Sep 9 19:14:38.827: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b72ee3c2-f2d0-11ea-88c2-0242ac110007 42 STEP: Trying to relaunch the pod, now with labels. 
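The relaunched pod simply carries a nodeSelector for the random label the test just applied, so the scheduler can only place it on the labelled node (hunter-worker2 here). Roughly, with the label key and value taken from the step above and the pod name and image illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-b72ee3c2-f2d0-11ea-88c2-0242ac110007: "42"   # label applied to the node above
  containers:
  - name: with-labels
    image: busybox
    command: ["sleep", "3600"]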
STEP: removing the label kubernetes.io/e2e-b72ee3c2-f2d0-11ea-88c2-0242ac110007 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b72ee3c2-f2d0-11ea-88c2-0242ac110007 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:14:47.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-rp6cm" for this suite. Sep 9 19:14:57.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:14:57.137: INFO: namespace: e2e-tests-sched-pred-rp6cm, resource: bindings, ignored listing per whitelist Sep 9 19:14:57.159: INFO: namespace e2e-tests-sched-pred-rp6cm deletion completed in 10.126961789s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:18.442 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:14:57.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0909 19:15:09.286322 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Sep 9 19:15:09.286: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:15:09.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nbqrt" for this suite. Sep 9 19:15:17.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:15:17.399: INFO: namespace: e2e-tests-gc-nbqrt, resource: bindings, ignored listing per whitelist Sep 9 19:15:17.400: INFO: namespace e2e-tests-gc-nbqrt deletion completed in 8.109782086s • [SLOW TEST:20.240 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:15:17.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 9 19:15:21.527: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-cbd14978-f2d0-11ea-88c2-0242ac110007,GenerateName:,Namespace:e2e-tests-events-tbcdv,SelfLink:/api/v1/namespaces/e2e-tests-events-tbcdv/pods/send-events-cbd14978-f2d0-11ea-88c2-0242ac110007,UID:cbd5bfd5-f2d0-11ea-b060-0242ac120006,ResourceVersion:744046,Generation:0,CreationTimestamp:2020-09-09 19:15:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 483156661,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wv5jf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wv5jf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wv5jf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ee9fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ee9fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:15:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:15:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:15:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:15:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.116,StartTime:2020-09-09 19:15:17 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-09-09 19:15:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://4b373fd4800f551df8447c2015683b911857a5abcc0047267d44536a198c70ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Sep 9 19:15:23.532: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 9 19:15:25.537: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:15:25.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-tbcdv" for this suite. 
Sep 9 19:16:11.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:16:11.573: INFO: namespace: e2e-tests-events-tbcdv, resource: bindings, ignored listing per whitelist
Sep 9 19:16:11.642: INFO: namespace e2e-tests-events-tbcdv deletion completed in 46.091749518s

• [SLOW TEST:54.242 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:16:11.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep 9 19:16:11.867: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ec31de8c-f2d0-11ea-b060-0242ac120006", Controller:(*bool)(0xc002607fa2), BlockOwnerDeletion:(*bool)(0xc002607fa3)}}
Sep 9 19:16:11.885: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ec2da1fb-f2d0-11ea-b060-0242ac120006", Controller:(*bool)(0xc001f77a32), BlockOwnerDeletion:(*bool)(0xc001f77a33)}}
Sep 9 19:16:11.944: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ec2e2590-f2d0-11ea-b060-0242ac120006", Controller:(*bool)(0xc0022f5712), BlockOwnerDeletion:(*bool)(0xc0022f5713)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:16:16.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kgfnx" for this suite.
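The three INFO lines above show the ownership circle this spec builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, yet the garbage collector must not block on it. A minimal Go sketch of that wiring, using placeholder names and UIDs rather than the test's own objects:

```go
// Illustrative sketch only: the circular pod ownership reported in the log
// above. Real UIDs come back from the API server after each create; the
// values here are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedBy builds an OwnerReference pointing at another pod, with
// blockOwnerDeletion set the way the log above shows.
func ownedBy(name string, uid types.UID) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// Placeholder UIDs; a live test would read them from the created pods.
	uids := map[string]types.UID{"pod1": "uid-1", "pod2": "uid-2", "pod3": "uid-3"}

	pods := map[string]*corev1.Pod{}
	for _, n := range []string{"pod1", "pod2", "pod3"} {
		pods[n] = &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: n},
			Spec: corev1.PodSpec{Containers: []corev1.Container{{
				Name: "c", Image: "docker.io/library/nginx:1.14-alpine",
			}}},
		}
	}

	// Wire the circle: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
	pods["pod1"].OwnerReferences = []metav1.OwnerReference{ownedBy("pod3", uids["pod3"])}
	pods["pod2"].OwnerReferences = []metav1.OwnerReference{ownedBy("pod1", uids["pod1"])}
	pods["pod3"].OwnerReferences = []metav1.OwnerReference{ownedBy("pod2", uids["pod2"])}

	for _, n := range []string{"pod1", "pod2", "pod3"} {
		fmt.Printf("%s owned by %s\n", n, pods[n].OwnerReferences[0].Name)
	}
}
```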
Sep 9 19:16:23.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:16:23.049: INFO: namespace: e2e-tests-gc-kgfnx, resource: bindings, ignored listing per whitelist
Sep 9 19:16:23.085: INFO: namespace e2e-tests-gc-kgfnx deletion completed in 6.092390288s

• [SLOW TEST:11.444 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:16:23.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-f2fcbb6a-f2d0-11ea-88c2-0242ac110007
STEP: Creating secret with name secret-projected-all-test-volume-f2fcbb45-f2d0-11ea-88c2-0242ac110007
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 9 19:16:23.227: INFO: Waiting up to 5m0s for pod "projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-8qhl7" to be "success or failure"
Sep 9 19:16:23.245: INFO: Pod "projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.49079ms
Sep 9 19:16:25.249: INFO: Pod "projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022531277s
Sep 9 19:16:27.261: INFO: Pod "projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034564285s
STEP: Saw pod success
Sep 9 19:16:27.261: INFO: Pod "projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 19:16:27.264: INFO: Trying to get logs from node hunter-worker pod projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007 container projected-all-volume-test: 
STEP: delete the pod
Sep 9 19:16:27.281: INFO: Waiting for pod projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007 to disappear
Sep 9 19:16:27.299: INFO: Pod projected-volume-f2fcbaca-f2d0-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:16:27.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8qhl7" for this suite.
Sep 9 19:16:33.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:16:33.335: INFO: namespace: e2e-tests-projected-8qhl7, resource: bindings, ignored listing per whitelist
Sep 9 19:16:33.395: INFO: namespace e2e-tests-projected-8qhl7 deletion completed in 6.091883515s

• [SLOW TEST:10.309 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:16:33.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007
Sep 9 19:16:33.579: INFO: Pod name my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007: Found 0 pods out of 1
Sep 9 19:16:38.584: INFO: Pod name my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007: Found 1 pods out of 1
Sep 9 19:16:38.584: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007" are running
Sep 9 19:16:38.587: INFO: Pod "my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007-j669t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 19:16:33 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 19:16:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 19:16:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-09 19:16:33 +0000 UTC Reason: Message:}])
Sep 9 19:16:38.587: INFO: Trying to dial the pod
Sep 9 19:16:43.599: INFO: Controller my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007: Got expected result from replica 1 [my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007-j669t]: "my-hostname-basic-f91d071e-f2d0-11ea-88c2-0242ac110007-j669t", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:16:43.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-6bf5t" for this suite.
Sep 9 19:16:49.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:16:49.642: INFO: namespace: e2e-tests-replication-controller-6bf5t, resource: bindings, ignored listing per whitelist
Sep 9 19:16:49.690: INFO: namespace e2e-tests-replication-controller-6bf5t deletion completed in 6.087617423s

• [SLOW TEST:16.296 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:16:49.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 9 19:16:49.821: INFO: Waiting up to 5m0s for pod "pod-02d57d3b-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-25k2q" to be "success or failure"
Sep 9 19:16:49.824: INFO: Pod "pod-02d57d3b-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.073592ms
Sep 9 19:16:51.903: INFO: Pod "pod-02d57d3b-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08135069s
Sep 9 19:16:53.907: INFO: Pod "pod-02d57d3b-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085330955s
STEP: Saw pod success
Sep 9 19:16:53.907: INFO: Pod "pod-02d57d3b-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep 9 19:16:53.909: INFO: Trying to get logs from node hunter-worker pod pod-02d57d3b-f2d1-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep 9 19:16:53.989: INFO: Waiting for pod pod-02d57d3b-f2d1-11ea-88c2-0242ac110007 to disappear
Sep 9 19:16:54.018: INFO: Pod pod-02d57d3b-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:16:54.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-25k2q" for this suite.
Sep 9 19:17:00.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 9 19:17:00.089: INFO: namespace: e2e-tests-emptydir-25k2q, resource: bindings, ignored listing per whitelist
Sep 9 19:17:00.196: INFO: namespace e2e-tests-emptydir-25k2q deletion completed in 6.173919994s

• [SLOW TEST:10.505 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep 9 19:17:00.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep 9 19:17:04.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vkg2t" for this suite.
Sep 9 19:17:58.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:17:58.530: INFO: namespace: e2e-tests-kubelet-test-vkg2t, resource: bindings, ignored listing per whitelist Sep 9 19:17:58.577: INFO: namespace e2e-tests-kubelet-test-vkg2t deletion completed in 54.124625601s • [SLOW TEST:58.382 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:17:58.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Sep 9 19:18:02.743: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-2be5756e-f2d1-11ea-88c2-0242ac110007", GenerateName:"", Namespace:"e2e-tests-pods-qzszx", SelfLink:"/api/v1/namespaces/e2e-tests-pods-qzszx/pods/pod-submit-remove-2be5756e-f2d1-11ea-88c2-0242ac110007", UID:"2be7e31b-f2d1-11ea-b060-0242ac120006", ResourceVersion:"744536", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735275878, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"676630227"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kblhs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001601a40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kblhs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019a6e58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dd6c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019a72f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019a7320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019a7328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019a732c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275878, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275881, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275881, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735275878, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.2.88", StartTime:(*v1.Time)(0xc000c4f200), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000c4f220), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://d9bd46fd03fdf410767d66ea4db8d1dc4363d15b6f22613c7e7f6184821202fe"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Sep 9 19:18:10.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qzszx" for this suite. Sep 9 19:18:16.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 9 19:18:16.108: INFO: namespace: e2e-tests-pods-qzszx, resource: bindings, ignored listing per whitelist Sep 9 19:18:16.167: INFO: namespace e2e-tests-pods-qzszx deletion completed in 6.091903956s • [SLOW TEST:17.590 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Sep 9 19:18:16.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Sep 9 19:18:16.276: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

(the same /logs/ directory listing, alternatives.log and containers/, is returned for each repeated proxy request)
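The proxy spec above reads the node's log directory through the apiserver's proxy subresource. As a rough sketch (not the e2e framework's own code, and assuming a v1.13-era client-go whose request methods take no context), the same request could be issued like this:

```go
// Sketch only: fetching a node's log directory through the apiserver proxy
// subresource, roughly what the [sig-network] Proxy test above exercises.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/nodes/hunter-worker/proxy/logs/
	body, err := clientset.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("hunter-worker").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw() // v1.13-era signature: no context argument
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // e.g. lists alternatives.log and containers/
}
```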
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-j5m64/secret-test-3a1cbdb2-f2d1-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:18:22.547: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-j5m64" to be "success or failure"
Sep  9 19:18:22.551: INFO: Pod "pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18125ms
Sep  9 19:18:24.569: INFO: Pod "pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022300496s
Sep  9 19:18:26.578: INFO: Pod "pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03144571s
STEP: Saw pod success
Sep  9 19:18:26.578: INFO: Pod "pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:18:26.581: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007 container env-test: 
STEP: delete the pod
Sep  9 19:18:26.625: INFO: Waiting for pod pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:18:26.630: INFO: Pod pod-configmaps-3a1ea156-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:18:26.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-j5m64" for this suite.
Sep  9 19:18:32.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:18:32.685: INFO: namespace: e2e-tests-secrets-j5m64, resource: bindings, ignored listing per whitelist
Sep  9 19:18:32.722: INFO: namespace e2e-tests-secrets-j5m64 deletion completed in 6.087880442s

• [SLOW TEST:10.271 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
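For context, the spec above wires a Secret key into a container environment variable. A minimal sketch of the objects involved, with illustrative names, image, and key rather than the test's generated ones, assuming the Kubernetes Go API types:

```go
// Illustrative sketch of "consumable via the environment": a Secret key
// surfaced to a container through env valueFrom.secretKeyRef.
// Object construction only; the e2e framework creates these via the clientset.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	fmt.Println(secret.Name, pod.Spec.Containers[0].Env[0].Name)
}
```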
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:18:32.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-403fb79c-f2d1-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:18:32.867: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-z4gzs" to be "success or failure"
Sep  9 19:18:32.869: INFO: Pod "pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608623ms
Sep  9 19:18:34.873: INFO: Pod "pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00669146s
Sep  9 19:18:36.877: INFO: Pod "pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010774329s
STEP: Saw pod success
Sep  9 19:18:36.877: INFO: Pod "pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:18:36.880: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Sep  9 19:18:36.917: INFO: Waiting for pod pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:18:36.974: INFO: Pod pod-projected-secrets-40405fde-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:18:36.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z4gzs" for this suite.
Sep  9 19:18:42.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:18:43.021: INFO: namespace: e2e-tests-projected-z4gzs, resource: bindings, ignored listing per whitelist
Sep  9 19:18:43.072: INFO: namespace e2e-tests-projected-z4gzs deletion completed in 6.094696668s

• [SLOW TEST:10.350 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
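The projected-secret spec above remaps a secret key to a different file name (and mode) inside a projected volume. A hedged sketch of that pod shape, with illustrative names, key, and paths:

```go
// Sketch of a projected-secret volume with key-to-path mappings, the shape the
// test above verifies; names, key, mode, and mount path are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Remap the stored key to a different file name in the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}
```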
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:18:43.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  9 19:18:43.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-z2hpg" to be "success or failure"
Sep  9 19:18:43.211: INFO: Pod "downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.985863ms
Sep  9 19:18:45.214: INFO: Pod "downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019638821s
Sep  9 19:18:47.219: INFO: Pod "downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023984708s
STEP: Saw pod success
Sep  9 19:18:47.219: INFO: Pod "downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:18:47.222: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007 container client-container: 
STEP: delete the pod
Sep  9 19:18:47.254: INFO: Waiting for pod downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:18:47.258: INFO: Pod downwardapi-volume-466d73e0-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:18:47.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z2hpg" for this suite.
Sep  9 19:18:53.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:18:53.318: INFO: namespace: e2e-tests-downward-api-z2hpg, resource: bindings, ignored listing per whitelist
Sep  9 19:18:53.352: INFO: namespace e2e-tests-downward-api-z2hpg deletion completed in 6.090633772s

• [SLOW TEST:10.280 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
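This spec projects limits.cpu into a downward-API volume file while the container declares no CPU limit, so the kubelet falls back to the node's allocatable CPU. A sketch of such a pod, with illustrative names:

```go
// Sketch of the downward-API volume pattern exercised above: limits.cpu is
// written to a file even though the container sets no CPU limit.
// Illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No resources.limits.cpu here: the projected value falls back
				// to the node's allocatable CPU, which is what the spec checks.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}
```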
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:18:53.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Sep  9 19:18:53.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-jdnzt run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Sep  9 19:18:58.997: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0909 19:18:58.925880    2677 log.go:172] (0xc000138790) (0xc0002a8140) Create stream\nI0909 19:18:58.925914    2677 log.go:172] (0xc000138790) (0xc0002a8140) Stream added, broadcasting: 1\nI0909 19:18:58.928980    2677 log.go:172] (0xc000138790) Reply frame received for 1\nI0909 19:18:58.929060    2677 log.go:172] (0xc000138790) (0xc0001f6000) Create stream\nI0909 19:18:58.929078    2677 log.go:172] (0xc000138790) (0xc0001f6000) Stream added, broadcasting: 3\nI0909 19:18:58.930154    2677 log.go:172] (0xc000138790) Reply frame received for 3\nI0909 19:18:58.930199    2677 log.go:172] (0xc000138790) (0xc0002a81e0) Create stream\nI0909 19:18:58.930214    2677 log.go:172] (0xc000138790) (0xc0002a81e0) Stream added, broadcasting: 5\nI0909 19:18:58.931244    2677 log.go:172] (0xc000138790) Reply frame received for 5\nI0909 19:18:58.931285    2677 log.go:172] (0xc000138790) (0xc0001f60a0) Create stream\nI0909 19:18:58.931297    2677 log.go:172] (0xc000138790) (0xc0001f60a0) Stream added, broadcasting: 7\nI0909 19:18:58.932350    2677 log.go:172] (0xc000138790) Reply frame received for 7\nI0909 19:18:58.932552    2677 log.go:172] (0xc0001f6000) (3) Writing data frame\nI0909 19:18:58.932662    2677 log.go:172] (0xc0001f6000) (3) Writing data frame\nI0909 19:18:58.933636    2677 log.go:172] (0xc000138790) Data frame received for 5\nI0909 19:18:58.933655    2677 log.go:172] (0xc0002a81e0) (5) Data frame handling\nI0909 19:18:58.933672    2677 log.go:172] (0xc0002a81e0) (5) Data frame sent\nI0909 19:18:58.934415    2677 log.go:172] (0xc000138790) Data frame received for 5\nI0909 19:18:58.934430    2677 log.go:172] (0xc0002a81e0) (5) Data frame handling\nI0909 19:18:58.934442    2677 log.go:172] (0xc0002a81e0) (5) Data frame sent\nI0909 19:18:58.970398    2677 log.go:172] (0xc000138790) Data frame received for 7\nI0909 19:18:58.970460    2677 log.go:172] (0xc0001f60a0) (7) Data frame handling\nI0909 19:18:58.970510    2677 log.go:172] (0xc000138790) Data frame received for 5\nI0909 19:18:58.970549    2677 log.go:172] (0xc0002a81e0) (5) Data frame handling\nI0909 19:18:58.970655    2677 log.go:172] (0xc000138790) Data frame received for 1\nI0909 19:18:58.970683    2677 log.go:172] (0xc0002a8140) (1) Data frame handling\nI0909 19:18:58.970706    2677 log.go:172] (0xc0002a8140) (1) Data frame sent\nI0909 19:18:58.970780    2677 log.go:172] (0xc000138790) (0xc0002a8140) Stream removed, broadcasting: 1\nI0909 19:18:58.970945    2677 log.go:172] (0xc000138790) (0xc0001f6000) Stream removed, broadcasting: 3\nI0909 19:18:58.971016    2677 log.go:172] (0xc000138790) Go away received\nI0909 19:18:58.971052    2677 log.go:172] (0xc000138790) (0xc0002a8140) Stream removed, broadcasting: 1\nI0909 19:18:58.971111    2677 log.go:172] (0xc000138790) (0xc0001f6000) Stream removed, broadcasting: 3\nI0909 19:18:58.971130    2677 log.go:172] (0xc000138790) (0xc0002a81e0) Stream removed, broadcasting: 5\nI0909 19:18:58.971143    2677 log.go:172] (0xc000138790) (0xc0001f60a0) Stream removed, broadcasting: 7\n"
Sep  9 19:18:58.997: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:19:01.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jdnzt" for this suite.
Sep  9 19:19:11.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:19:11.053: INFO: namespace: e2e-tests-kubectl-jdnzt, resource: bindings, ignored listing per whitelist
Sep  9 19:19:11.098: INFO: namespace e2e-tests-kubectl-jdnzt deletion completed in 10.088668836s

• [SLOW TEST:17.746 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
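The framework drives this spec by shelling out to kubectl with stdin attached, as the Running '...' line above shows. A rough Go equivalent of that invocation (namespace is illustrative; kubectl and the kubeconfig path must exist where this runs, and the --generator flag is the deprecated one the log itself reports):

```go
// Sketch of the same kubectl run --rm invocation driven from Go, roughly how
// the e2e framework shells out above. Not the framework's own code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=default",
		"run", "e2e-test-rm-busybox-job",
		"--image=docker.io/library/busybox:1.29",
		"--rm=true", "--generator=job/v1", "--restart=OnFailure",
		"--attach=true", "--stdin",
		"--", "sh", "-c", "cat && echo 'stdin closed'")

	// The job's container runs `cat`, so whatever is written on stdin is
	// echoed back before the trailing marker, matching the stdout above.
	cmd.Stdin = strings.NewReader("abcd1234")

	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}
```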
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:19:11.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep  9 19:19:11.242: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:11.244: INFO: Number of nodes with available pods: 0
Sep  9 19:19:11.244: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:19:12.282: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:12.285: INFO: Number of nodes with available pods: 0
Sep  9 19:19:12.285: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:19:13.249: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:13.252: INFO: Number of nodes with available pods: 0
Sep  9 19:19:13.252: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:19:14.275: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:14.279: INFO: Number of nodes with available pods: 0
Sep  9 19:19:14.279: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:19:15.249: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:15.252: INFO: Number of nodes with available pods: 2
Sep  9 19:19:15.253: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Sep  9 19:19:15.266: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:15.268: INFO: Number of nodes with available pods: 1
Sep  9 19:19:15.268: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:16.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:16.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:16.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:17.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:17.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:17.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:18.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:18.277: INFO: Number of nodes with available pods: 1
Sep  9 19:19:18.277: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:19.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:19.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:19.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:20.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:20.275: INFO: Number of nodes with available pods: 1
Sep  9 19:19:20.275: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:21.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:21.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:21.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:22.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:22.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:22.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:23.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:23.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:23.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:24.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:24.275: INFO: Number of nodes with available pods: 1
Sep  9 19:19:24.275: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:25.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:25.275: INFO: Number of nodes with available pods: 1
Sep  9 19:19:25.275: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:26.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:26.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:26.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:27.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:27.274: INFO: Number of nodes with available pods: 1
Sep  9 19:19:27.274: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:28.277: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:28.281: INFO: Number of nodes with available pods: 1
Sep  9 19:19:28.281: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:29.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:29.275: INFO: Number of nodes with available pods: 1
Sep  9 19:19:29.275: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:30.274: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:30.278: INFO: Number of nodes with available pods: 1
Sep  9 19:19:30.278: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:31.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:31.276: INFO: Number of nodes with available pods: 1
Sep  9 19:19:31.276: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:32.273: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:32.275: INFO: Number of nodes with available pods: 1
Sep  9 19:19:32.275: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:19:33.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:19:33.275: INFO: Number of nodes with available pods: 2
Sep  9 19:19:33.275: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2khms, will wait for the garbage collector to delete the pods
Sep  9 19:19:33.338: INFO: Deleting DaemonSet.extensions daemon-set took: 6.710907ms
Sep  9 19:19:33.438: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.26569ms
Sep  9 19:19:40.142: INFO: Number of nodes with available pods: 0
Sep  9 19:19:40.142: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 19:19:40.166: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2khms/daemonsets","resourceVersion":"744913"},"items":null}

Sep  9 19:19:40.168: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2khms/pods","resourceVersion":"744913"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:19:40.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2khms" for this suite.
Sep  9 19:19:46.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:19:46.248: INFO: namespace: e2e-tests-daemonsets-2khms, resource: bindings, ignored listing per whitelist
Sep  9 19:19:46.275: INFO: namespace e2e-tests-daemonsets-2khms deletion completed in 6.093325731s

• [SLOW TEST:35.177 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
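The DaemonSet spec above creates a simple DaemonSet, deletes one of its pods, and waits for the controller to revive it on the same node. A minimal sketch of such a DaemonSet object, with illustrative labels and image rather than the test's exact values:

```go
// Sketch of a simple DaemonSet like the one the test above creates and tears
// down. Object construction only; labels and image are illustrative.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}

	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}

	// Without a toleration for node-role.kubernetes.io/master:NoSchedule the
	// pods land only on worker nodes, which matches the repeated
	// "can't tolerate node hunter-control-plane" messages in the log above.
	fmt.Printf("daemonset %s selects %v\n", ds.Name, ds.Spec.Selector.MatchLabels)
}
```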
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:19:46.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6c1ba9d1-f2d1-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:19:46.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-c7bfg" to be "success or failure"
Sep  9 19:19:46.445: INFO: Pod "pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.446423ms
Sep  9 19:19:48.476: INFO: Pod "pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043961103s
Sep  9 19:19:50.482: INFO: Pod "pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050129997s
STEP: Saw pod success
Sep  9 19:19:50.482: INFO: Pod "pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:19:50.485: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 19:19:50.522: INFO: Waiting for pod pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:19:50.535: INFO: Pod pod-projected-configmaps-6c1c3c39-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:19:50.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c7bfg" for this suite.
Sep  9 19:19:56.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:19:56.569: INFO: namespace: e2e-tests-projected-c7bfg, resource: bindings, ignored listing per whitelist
Sep  9 19:19:56.635: INFO: namespace e2e-tests-projected-c7bfg deletion completed in 6.095694353s

• [SLOW TEST:10.360 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
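This is the ConfigMap counterpart of the projected-secret case earlier: the same projected-volume mechanism, sourcing a ConfigMap instead. A small illustrative sketch (names and data are placeholders):

```go
// Companion sketch to the projected-secret example: a projected volume that
// sources a ConfigMap. Illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				}},
			},
		},
	}
	// Mounted into a container, data-1 appears as a file whose contents are
	// the configMap value, which is what the spec's test container reads back.
	fmt.Println(vol.Name, cm.Data["data-1"])
}
```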
------------------------------
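
The projected-configmap test above mounts a ConfigMap into the pod through a "projected" volume and reads the file back from inside the container. A minimal sketch of the kind of pod object involved, using the k8s.io/api Go types; the object name, image, and key/path mapping here are illustrative placeholders, not the generated values the suite uses.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that mounts ConfigMap "projected-cm" via a projected volume and
	// prints the projected file, roughly what the test's helper pod does.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
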
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:19:56.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:19:56.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:20:00.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mtjrk" for this suite.
Sep  9 19:20:38.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:20:38.843: INFO: namespace: e2e-tests-pods-mtjrk, resource: bindings, ignored listing per whitelist
Sep  9 19:20:38.919: INFO: namespace e2e-tests-pods-mtjrk deletion completed in 38.110982653s

• [SLOW TEST:42.283 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:20:38.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:20:39.031: INFO: Creating deployment "test-recreate-deployment"
Sep  9 19:20:39.035: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Sep  9 19:20:39.047: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Sep  9 19:20:41.069: INFO: Waiting for deployment "test-recreate-deployment" to complete
Sep  9 19:20:41.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276039, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276039, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276039, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276039, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  9 19:20:43.076: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep  9 19:20:43.083: INFO: Updating deployment test-recreate-deployment
Sep  9 19:20:43.083: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Sep  9 19:20:43.283: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-jsdkd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jsdkd/deployments/test-recreate-deployment,UID:8b79d1a9-f2d1-11ea-b060-0242ac120006,ResourceVersion:745151,Generation:2,CreationTimestamp:2020-09-09 19:20:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-09-09 19:20:43 +0000 UTC 2020-09-09 19:20:43 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-09 19:20:43 +0000 UTC 2020-09-09 19:20:39 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Sep  9 19:20:43.297: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-jsdkd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jsdkd/replicasets/test-recreate-deployment-589c4bfd,UID:8df1531e-f2d1-11ea-b060-0242ac120006,ResourceVersion:745150,Generation:1,CreationTimestamp:2020-09-09 19:20:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8b79d1a9-f2d1-11ea-b060-0242ac120006 0xc000d430ff 0xc000d43140}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 19:20:43.297: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Sep  9 19:20:43.298: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-jsdkd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jsdkd/replicasets/test-recreate-deployment-5bf7f65dc,UID:8b7c2161-f2d1-11ea-b060-0242ac120006,ResourceVersion:745140,Generation:2,CreationTimestamp:2020-09-09 19:20:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8b79d1a9-f2d1-11ea-b060-0242ac120006 0xc000d43200 0xc000d43201}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep  9 19:20:43.301: INFO: Pod "test-recreate-deployment-589c4bfd-2hs29" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2hs29,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-jsdkd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jsdkd/pods/test-recreate-deployment-589c4bfd-2hs29,UID:8df35dc2-f2d1-11ea-b060-0242ac120006,ResourceVersion:745152,Generation:0,CreationTimestamp:2020-09-09 19:20:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 8df1531e-f2d1-11ea-b060-0242ac120006 0xc000d43fff 0xc000966130}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7mpd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7mpd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7mpd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0009661a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0009661c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:20:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:20:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:20:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-09 19:20:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:20:43.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jsdkd" for this suite.
Sep  9 19:20:49.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:20:49.575: INFO: namespace: e2e-tests-deployment-jsdkd, resource: bindings, ignored listing per whitelist
Sep  9 19:20:49.611: INFO: namespace e2e-tests-deployment-jsdkd deletion completed in 6.1774981s

• [SLOW TEST:10.692 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
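
The RecreateDeployment test above drives a Deployment whose strategy is Recreate (visible in the dumped spec as Strategy:DeploymentStrategy{Type:Recreate}), so the old redis ReplicaSet is scaled to zero before any pod from the new nginx template appears. A minimal sketch of the spec fields that select this behaviour, using the k8s.io/api types; the name, labels and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// With Strategy.Type=Recreate the controller deletes all old pods before
	// creating any pods from the updated template (no overlap, unlike RollingUpdate).
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "recreate-example"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
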
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:20:49.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-91d79b75-f2d1-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:20:49.790: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-6bkvn" to be "success or failure"
Sep  9 19:20:49.800: INFO: Pod "pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.216086ms
Sep  9 19:20:51.804: INFO: Pod "pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014243895s
Sep  9 19:20:53.808: INFO: Pod "pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018616844s
STEP: Saw pod success
Sep  9 19:20:53.808: INFO: Pod "pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:20:53.811: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 19:20:53.831: INFO: Waiting for pod pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:20:53.851: INFO: Pod pod-projected-configmaps-91e1d45c-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:20:53.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6bkvn" for this suite.
Sep  9 19:20:59.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:20:59.908: INFO: namespace: e2e-tests-projected-6bkvn, resource: bindings, ignored listing per whitelist
Sep  9 19:20:59.943: INFO: namespace e2e-tests-projected-6bkvn deletion completed in 6.088122183s

• [SLOW TEST:10.331 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:20:59.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-qvfk
STEP: Creating a pod to test atomic-volume-subpath
Sep  9 19:21:00.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qvfk" in namespace "e2e-tests-subpath-nsnzz" to be "success or failure"
Sep  9 19:21:00.087: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Pending", Reason="", readiness=false. Elapsed: 7.83744ms
Sep  9 19:21:02.091: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011459501s
Sep  9 19:21:04.100: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020658098s
Sep  9 19:21:06.105: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024956811s
Sep  9 19:21:08.109: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 8.029796815s
Sep  9 19:21:10.114: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 10.034189425s
Sep  9 19:21:12.118: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 12.038383887s
Sep  9 19:21:14.125: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 14.045224894s
Sep  9 19:21:16.129: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 16.049611068s
Sep  9 19:21:18.134: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 18.054104364s
Sep  9 19:21:20.138: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 20.058215457s
Sep  9 19:21:22.142: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 22.062667485s
Sep  9 19:21:24.148: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 24.068530729s
Sep  9 19:21:26.153: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Running", Reason="", readiness=false. Elapsed: 26.073072719s
Sep  9 19:21:28.157: INFO: Pod "pod-subpath-test-configmap-qvfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.077574112s
STEP: Saw pod success
Sep  9 19:21:28.157: INFO: Pod "pod-subpath-test-configmap-qvfk" satisfied condition "success or failure"
Sep  9 19:21:28.160: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-qvfk container test-container-subpath-configmap-qvfk: 
STEP: delete the pod
Sep  9 19:21:28.196: INFO: Waiting for pod pod-subpath-test-configmap-qvfk to disappear
Sep  9 19:21:28.207: INFO: Pod pod-subpath-test-configmap-qvfk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qvfk
Sep  9 19:21:28.207: INFO: Deleting pod "pod-subpath-test-configmap-qvfk" in namespace "e2e-tests-subpath-nsnzz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:21:28.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-nsnzz" for this suite.
Sep  9 19:21:34.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:21:34.294: INFO: namespace: e2e-tests-subpath-nsnzz, resource: bindings, ignored listing per whitelist
Sep  9 19:21:34.297: INFO: namespace e2e-tests-subpath-nsnzz deletion completed in 6.082642669s

• [SLOW TEST:34.353 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
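
The Atomic writer subpath test above mounts a single ConfigMap key over an existing file inside the container via a volumeMount subPath, then has the container read it repeatedly while the test polls (the ~20s of Running phases). A minimal sketch of the mount shape; the ConfigMap name, key, paths and command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A ConfigMap volume whose key "configmap-key" is overlaid, via subPath,
	// on one existing file path inside the container instead of a whole directory.
	volume := corev1.Volume{
		Name: "config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-configmap"},
			},
		},
	}
	container := corev1.Container{
		Name:    "test-container-subpath",
		Image:   "busybox",
		Command: []string{"cat", "/etc/existing-file"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "config",
			MountPath: "/etc/existing-file", // mountPath of an existing file
			SubPath:   "configmap-key",      // mount just this key, not the whole volume
		}},
	}
	out, _ := json.MarshalIndent(map[string]interface{}{"volume": volume, "container": container}, "", "  ")
	fmt.Println(string(out))
}
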
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:21:34.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Sep  9 19:21:34.384: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:21:42.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-4l9n9" for this suite.
Sep  9 19:22:04.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:22:04.362: INFO: namespace: e2e-tests-init-container-4l9n9, resource: bindings, ignored listing per whitelist
Sep  9 19:22:04.383: INFO: namespace e2e-tests-init-container-4l9n9 deletion completed in 22.112747839s

• [SLOW TEST:30.086 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
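
The InitContainer test above ("PodSpec: initContainers in spec.initContainers") builds a pod whose init containers must each run to completion, in order, before the regular container starts, while RestartPolicy stays Always. A minimal sketch of such a pod; names, images and commands are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both init containers run sequentially and must exit 0 before "run1" starts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-container-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
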
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:22:04.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Sep  9 19:22:04.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sxgkz'
Sep  9 19:22:04.768: INFO: stderr: ""
Sep  9 19:22:04.768: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Sep  9 19:22:05.772: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:22:05.772: INFO: Found 0 / 1
Sep  9 19:22:06.772: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:22:06.772: INFO: Found 0 / 1
Sep  9 19:22:07.773: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:22:07.773: INFO: Found 0 / 1
Sep  9 19:22:08.772: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:22:08.772: INFO: Found 1 / 1
Sep  9 19:22:08.772: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  9 19:22:08.776: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:22:08.776: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Sep  9 19:22:08.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tzcdh redis-master --namespace=e2e-tests-kubectl-sxgkz'
Sep  9 19:22:08.911: INFO: stderr: ""
Sep  9 19:22:08.911: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Sep 19:22:07.482 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Sep 19:22:07.482 # Server started, Redis version 3.2.12\n1:M 09 Sep 19:22:07.482 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Sep 19:22:07.482 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Sep  9 19:22:08.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tzcdh redis-master --namespace=e2e-tests-kubectl-sxgkz --tail=1'
Sep  9 19:22:09.025: INFO: stderr: ""
Sep  9 19:22:09.025: INFO: stdout: "1:M 09 Sep 19:22:07.482 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Sep  9 19:22:09.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tzcdh redis-master --namespace=e2e-tests-kubectl-sxgkz --limit-bytes=1'
Sep  9 19:22:09.137: INFO: stderr: ""
Sep  9 19:22:09.137: INFO: stdout: " "
STEP: exposing timestamps
Sep  9 19:22:09.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tzcdh redis-master --namespace=e2e-tests-kubectl-sxgkz --tail=1 --timestamps'
Sep  9 19:22:09.236: INFO: stderr: ""
Sep  9 19:22:09.236: INFO: stdout: "2020-09-09T19:22:07.482664677Z 1:M 09 Sep 19:22:07.482 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Sep  9 19:22:11.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tzcdh redis-master --namespace=e2e-tests-kubectl-sxgkz --since=1s'
Sep  9 19:22:11.846: INFO: stderr: ""
Sep  9 19:22:11.846: INFO: stdout: ""
Sep  9 19:22:11.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tzcdh redis-master --namespace=e2e-tests-kubectl-sxgkz --since=24h'
Sep  9 19:22:11.962: INFO: stderr: ""
Sep  9 19:22:11.962: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Sep 19:22:07.482 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Sep 19:22:07.482 # Server started, Redis version 3.2.12\n1:M 09 Sep 19:22:07.482 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Sep 19:22:07.482 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Sep  9 19:22:11.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sxgkz'
Sep  9 19:22:12.076: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 19:22:12.076: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Sep  9 19:22:12.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-sxgkz'
Sep  9 19:22:12.183: INFO: stderr: "No resources found.\n"
Sep  9 19:22:12.183: INFO: stdout: ""
Sep  9 19:22:12.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-sxgkz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  9 19:22:12.289: INFO: stderr: ""
Sep  9 19:22:12.289: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:22:12.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sxgkz" for this suite.
Sep  9 19:22:18.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:22:18.531: INFO: namespace: e2e-tests-kubectl-sxgkz, resource: bindings, ignored listing per whitelist
Sep  9 19:22:18.559: INFO: namespace e2e-tests-kubectl-sxgkz deletion completed in 6.266032108s

• [SLOW TEST:14.175 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
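
The kubectl log-filtering steps above (--tail, --limit-bytes, --timestamps, --since) correspond to fields on the pod log API. A minimal sketch of the equivalent corev1.PodLogOptions; against a live cluster these would be passed to clientset.CoreV1().Pods(ns).GetLogs(name, opts), but this snippet only constructs and prints the options.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Equivalents of the kubectl flags exercised above:
	//   --tail=1        -> TailLines
	//   --limit-bytes=1 -> LimitBytes
	//   --timestamps    -> Timestamps
	//   --since=1s      -> SinceSeconds
	opts := &corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    int64Ptr(1),
		LimitBytes:   int64Ptr(1),
		Timestamps:   true,
		SinceSeconds: int64Ptr(1),
	}
	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}
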
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:22:18.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Sep  9 19:22:18.705: INFO: Waiting up to 5m0s for pod "downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-9kjrf" to be "success or failure"
Sep  9 19:22:18.712: INFO: Pod "downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.607737ms
Sep  9 19:22:20.728: INFO: Pod "downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023251704s
Sep  9 19:22:22.767: INFO: Pod "downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061973439s
STEP: Saw pod success
Sep  9 19:22:22.767: INFO: Pod "downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:22:22.770: INFO: Trying to get logs from node hunter-worker2 pod downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007 container dapi-container: 
STEP: delete the pod
Sep  9 19:22:22.807: INFO: Waiting for pod downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:22:22.826: INFO: Pod downward-api-c6db371e-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:22:22.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9kjrf" for this suite.
Sep  9 19:22:28.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:22:28.850: INFO: namespace: e2e-tests-downward-api-9kjrf, resource: bindings, ignored listing per whitelist
Sep  9 19:22:28.922: INFO: namespace e2e-tests-downward-api-9kjrf deletion completed in 6.091770293s

• [SLOW TEST:10.363 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
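
The Downward API test above injects the pod's own UID into the container environment through an env fieldRef and then checks the container log for it. A minimal sketch of the env entry; the variable name and command are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Container that receives its own pod's UID via the downward API and
	// dumps the environment so the value shows up in its log.
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "POD_UID", // illustrative variable name
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
			},
		}},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
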
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:22:28.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-cd08100c-f2d1-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:22:29.050: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-7dcwc" to be "success or failure"
Sep  9 19:22:29.090: INFO: Pod "pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 39.733608ms
Sep  9 19:22:31.189: INFO: Pod "pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138885691s
Sep  9 19:22:33.193: INFO: Pod "pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143054945s
STEP: Saw pod success
Sep  9 19:22:33.193: INFO: Pod "pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:22:33.196: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Sep  9 19:22:33.235: INFO: Waiting for pod pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:22:33.246: INFO: Pod pod-configmaps-cd0c6bc8-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:22:33.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7dcwc" for this suite.
Sep  9 19:22:39.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:22:39.306: INFO: namespace: e2e-tests-configmap-7dcwc, resource: bindings, ignored listing per whitelist
Sep  9 19:22:39.350: INFO: namespace e2e-tests-configmap-7dcwc deletion completed in 6.100093694s

• [SLOW TEST:10.428 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
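
The non-root ConfigMap variant above runs the consuming container under a non-root UID and still expects the projected file to be readable. A minimal sketch of the security context plus ConfigMap volume involved; the UID, names, key and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	// Pod that reads a ConfigMap volume while running as a non-root user.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-nonroot-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1000), // placeholder non-root UID
				RunAsNonRoot: boolPtr(true),
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
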
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:22:39.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep  9 19:22:39.457: INFO: Waiting up to 5m0s for pod "pod-d33ed056-f2d1-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-zwbnm" to be "success or failure"
Sep  9 19:22:39.468: INFO: Pod "pod-d33ed056-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.360475ms
Sep  9 19:22:41.471: INFO: Pod "pod-d33ed056-f2d1-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014049678s
Sep  9 19:22:43.475: INFO: Pod "pod-d33ed056-f2d1-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01748018s
STEP: Saw pod success
Sep  9 19:22:43.475: INFO: Pod "pod-d33ed056-f2d1-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:22:43.477: INFO: Trying to get logs from node hunter-worker2 pod pod-d33ed056-f2d1-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:22:43.509: INFO: Waiting for pod pod-d33ed056-f2d1-11ea-88c2-0242ac110007 to disappear
Sep  9 19:22:43.526: INFO: Pod pod-d33ed056-f2d1-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:22:43.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zwbnm" for this suite.
Sep  9 19:22:49.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:22:49.555: INFO: namespace: e2e-tests-emptydir-zwbnm, resource: bindings, ignored listing per whitelist
Sep  9 19:22:49.608: INFO: namespace e2e-tests-emptydir-zwbnm deletion completed in 6.078533037s

• [SLOW TEST:10.258 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
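
The emptyDir (root,0666,tmpfs) case above mounts a memory-backed emptyDir (tmpfs) and verifies a file created inside it with mode 0666. A minimal sketch of the volume definition; the write-and-list command is illustrative, not the suite's actual test binary invocation.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod with a tmpfs-backed emptyDir; the container writes a 0666 file into
	// the mount and lists it so the permission bits can be checked from the log.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
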
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:22:49.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Sep  9 19:22:50.219: INFO: created pod pod-service-account-defaultsa
Sep  9 19:22:50.219: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Sep  9 19:22:50.246: INFO: created pod pod-service-account-mountsa
Sep  9 19:22:50.246: INFO: pod pod-service-account-mountsa service account token volume mount: true
Sep  9 19:22:50.275: INFO: created pod pod-service-account-nomountsa
Sep  9 19:22:50.275: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Sep  9 19:22:50.336: INFO: created pod pod-service-account-defaultsa-mountspec
Sep  9 19:22:50.337: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Sep  9 19:22:50.373: INFO: created pod pod-service-account-mountsa-mountspec
Sep  9 19:22:50.373: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Sep  9 19:22:50.444: INFO: created pod pod-service-account-nomountsa-mountspec
Sep  9 19:22:50.444: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Sep  9 19:22:50.480: INFO: created pod pod-service-account-defaultsa-nomountspec
Sep  9 19:22:50.480: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Sep  9 19:22:50.513: INFO: created pod pod-service-account-mountsa-nomountspec
Sep  9 19:22:50.513: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Sep  9 19:22:50.572: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep  9 19:22:50.572: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:22:50.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-8btcw" for this suite.
Sep  9 19:23:20.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:23:20.806: INFO: namespace: e2e-tests-svcaccounts-8btcw, resource: bindings, ignored listing per whitelist
Sep  9 19:23:20.828: INFO: namespace e2e-tests-svcaccounts-8btcw deletion completed in 30.205244246s

• [SLOW TEST:31.220 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
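
The ServiceAccounts test above creates a pod for every combination of the ServiceAccount-level and pod-level automountServiceAccountToken settings and records whether the token volume mount shows up (the true/false lines). A minimal sketch of the two places the flag lives; the ServiceAccount name here is a placeholder, and when both are set the pod-level field takes precedence.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// ServiceAccount-level default: do not automount the API token.
	sa := corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: boolPtr(false),
	}
	// Pod-level opt-out; this field wins over the ServiceAccount default.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "nomount-sa",
			AutomountServiceAccountToken: boolPtr(false),
			Containers: []corev1.Container{{Name: "token-test", Image: "busybox", Command: []string{"sleep", "3600"}}},
		},
	}
	out, _ := json.MarshalIndent(map[string]interface{}{"serviceAccount": sa, "pod": pod}, "", "  ")
	fmt.Println(string(out))
}
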
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:23:20.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-zjdmw
Sep  9 19:23:24.941: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-zjdmw
STEP: checking the pod's current state and verifying that restartCount is present
Sep  9 19:23:24.943: INFO: Initial restart count of pod liveness-exec is 0
Sep  9 19:24:13.054: INFO: Restart count of pod e2e-tests-container-probe-zjdmw/liveness-exec is now 1 (48.111200896s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:24:13.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zjdmw" for this suite.
Sep  9 19:24:19.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:24:19.167: INFO: namespace: e2e-tests-container-probe-zjdmw, resource: bindings, ignored listing per whitelist
Sep  9 19:24:19.196: INFO: namespace e2e-tests-container-probe-zjdmw deletion completed in 6.091298161s

• [SLOW TEST:58.367 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
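
The liveness test above runs a container that is probed with exec "cat /tmp/health"; once the file disappears the probe fails and the kubelet restarts the container, which is the restart count going from 0 to 1 after ~48s. A minimal sketch of that probe; the image, command and timing values are illustrative, and note that recent k8s.io/api releases name the embedded probe struct ProbeHandler (older releases, such as the v1.13 libraries this run was built against, call it Handler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Container that is healthy only while /tmp/health exists; after the file is
	// removed the exec probe fails and the kubelet restarts the container.
	container := corev1.Container{
		Name:    "liveness-exec",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			// Older k8s.io/api releases name this embedded struct Handler.
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
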
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:24:19.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:24:25.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-gs8hl" for this suite.
Sep  9 19:24:31.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:24:31.599: INFO: namespace: e2e-tests-namespaces-gs8hl, resource: bindings, ignored listing per whitelist
Sep  9 19:24:31.605: INFO: namespace e2e-tests-namespaces-gs8hl deletion completed in 6.084530739s
STEP: Destroying namespace "e2e-tests-nsdeletetest-chndr" for this suite.
Sep  9 19:24:31.607: INFO: Namespace e2e-tests-nsdeletetest-chndr was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-kjv9q" for this suite.
Sep  9 19:24:37.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:24:37.693: INFO: namespace: e2e-tests-nsdeletetest-kjv9q, resource: bindings, ignored listing per whitelist
Sep  9 19:24:37.702: INFO: namespace e2e-tests-nsdeletetest-kjv9q deletion completed in 6.095173834s

• [SLOW TEST:18.507 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:24:37.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Sep  9 19:24:37.795: INFO: namespace e2e-tests-kubectl-n9h26
Sep  9 19:24:37.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-n9h26'
Sep  9 19:24:38.070: INFO: stderr: ""
Sep  9 19:24:38.070: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  9 19:24:39.106: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:24:39.106: INFO: Found 0 / 1
Sep  9 19:24:40.074: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:24:40.074: INFO: Found 0 / 1
Sep  9 19:24:41.073: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:24:41.073: INFO: Found 1 / 1
Sep  9 19:24:41.073: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep  9 19:24:41.077: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:24:41.077: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  9 19:24:41.077: INFO: wait on redis-master startup in e2e-tests-kubectl-n9h26 
Sep  9 19:24:41.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t8ww7 redis-master --namespace=e2e-tests-kubectl-n9h26'
Sep  9 19:24:41.183: INFO: stderr: ""
Sep  9 19:24:41.183: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Sep 19:24:40.688 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Sep 19:24:40.688 # Server started, Redis version 3.2.12\n1:M 09 Sep 19:24:40.688 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Sep 19:24:40.688 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Sep  9 19:24:41.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-n9h26'
Sep  9 19:24:41.322: INFO: stderr: ""
Sep  9 19:24:41.322: INFO: stdout: "service/rm2 exposed\n"
Sep  9 19:24:41.327: INFO: Service rm2 in namespace e2e-tests-kubectl-n9h26 found.
STEP: exposing service
Sep  9 19:24:43.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-n9h26'
Sep  9 19:24:43.473: INFO: stderr: ""
Sep  9 19:24:43.473: INFO: stdout: "service/rm3 exposed\n"
Sep  9 19:24:43.482: INFO: Service rm3 in namespace e2e-tests-kubectl-n9h26 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:24:45.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n9h26" for this suite.
Sep  9 19:25:09.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:25:09.576: INFO: namespace: e2e-tests-kubectl-n9h26, resource: bindings, ignored listing per whitelist
Sep  9 19:25:09.594: INFO: namespace e2e-tests-kubectl-n9h26 deletion completed in 24.101611717s

• [SLOW TEST:31.892 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
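Stripped of the test harness, the sequence above is three kubectl invocations: create a replication controller, expose it as a service, then expose that service under a second name. A rough equivalent, assuming a reachable cluster; the RC manifest file name is illustrative (the test pipes its manifest over stdin with "create -f -"):

# Create the replication controller (one redis-master replica).
kubectl create -f redis-master-rc.yaml
# Expose the RC as a new ClusterIP service named rm2 on port 1234, targeting the pod's 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
# Expose the rm2 service again under a second name and port.
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get services rm2 rm3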
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:25:09.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:25:09.750: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Sep  9 19:25:09.757: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nnrnf/daemonsets","resourceVersion":"746117"},"items":null}

Sep  9 19:25:09.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nnrnf/pods","resourceVersion":"746117"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:25:09.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nnrnf" for this suite.
Sep  9 19:25:15.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:25:15.844: INFO: namespace: e2e-tests-daemonsets-nnrnf, resource: bindings, ignored listing per whitelist
Sep  9 19:25:15.879: INFO: namespace e2e-tests-daemonsets-nnrnf deletion completed in 6.106905641s

S [SKIPPING] [6.284 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Sep  9 19:25:09.751: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:25:15.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:25:16.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4kwnf" for this suite.
Sep  9 19:25:22.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:25:22.063: INFO: namespace: e2e-tests-services-4kwnf, resource: bindings, ignored listing per whitelist
Sep  9 19:25:22.109: INFO: namespace e2e-tests-services-4kwnf deletion completed in 6.099891845s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.230 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
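The spec above has no visible STEP lines because the check is a single lookup: it verifies that the built-in kubernetes service in the default namespace fronts the API server on a secure port. A hand-run approximation, assuming a working kubeconfig:

# The API service should exist in the default namespace and expose an HTTPS port (typically 443).
kubectl get service kubernetes -n default
kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].port}'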
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:25:22.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:25:26.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-686lr" for this suite.
Sep  9 19:25:32.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:25:32.289: INFO: namespace: e2e-tests-kubelet-test-686lr, resource: bindings, ignored listing per whitelist
Sep  9 19:25:32.304: INFO: namespace e2e-tests-kubelet-test-686lr deletion completed in 6.099352308s

• [SLOW TEST:10.195 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
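The kubelet spec above schedules a pod whose only container runs a command that always fails, then requires the container status to carry a terminated reason. A stripped-down approximation with kubectl, assuming a reachable cluster; the pod name and restart policy here are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/false"]
EOF
# Once the container has exited, its terminated state should carry a reason (typically "Error").
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'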
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:25:32.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:25:32.459: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep  9 19:25:32.466: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:32.468: INFO: Number of nodes with available pods: 0
Sep  9 19:25:32.468: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:25:33.473: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:33.477: INFO: Number of nodes with available pods: 0
Sep  9 19:25:33.477: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:25:34.473: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:34.477: INFO: Number of nodes with available pods: 0
Sep  9 19:25:34.477: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:25:35.560: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:35.599: INFO: Number of nodes with available pods: 0
Sep  9 19:25:35.599: INFO: Node hunter-worker is running more than one daemon pod
Sep  9 19:25:36.479: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:36.486: INFO: Number of nodes with available pods: 2
Sep  9 19:25:36.486: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep  9 19:25:36.538: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:36.539: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:36.562: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:37.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:37.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:37.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:38.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:38.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:38.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:39.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:39.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:39.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:39.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:40.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:40.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:40.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:40.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:41.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:41.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:41.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:41.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:42.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:42.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:42.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:42.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:43.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:43.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:43.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:43.569: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:44.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:44.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:44.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:44.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:45.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:45.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:45.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:45.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:46.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:46.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:46.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:46.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:47.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:47.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:47.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:47.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:48.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:48.566: INFO: Wrong image for pod: daemon-set-z6vkp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:48.566: INFO: Pod daemon-set-z6vkp is not available
Sep  9 19:25:48.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:49.580: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:49.580: INFO: Pod daemon-set-42vl9 is not available
Sep  9 19:25:49.583: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:50.565: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:50.566: INFO: Pod daemon-set-42vl9 is not available
Sep  9 19:25:50.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:51.690: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:51.690: INFO: Pod daemon-set-42vl9 is not available
Sep  9 19:25:51.694: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:52.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:52.566: INFO: Pod daemon-set-42vl9 is not available
Sep  9 19:25:52.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:53.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:53.570: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:54.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:54.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:55.566: INFO: Wrong image for pod: daemon-set-2dxxv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep  9 19:25:55.566: INFO: Pod daemon-set-2dxxv is not available
Sep  9 19:25:55.569: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:56.566: INFO: Pod daemon-set-czdbt is not available
Sep  9 19:25:56.571: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep  9 19:25:56.574: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:56.578: INFO: Number of nodes with available pods: 1
Sep  9 19:25:56.578: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:25:57.582: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:57.586: INFO: Number of nodes with available pods: 1
Sep  9 19:25:57.586: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:25:58.587: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:58.591: INFO: Number of nodes with available pods: 1
Sep  9 19:25:58.591: INFO: Node hunter-worker2 is running more than one daemon pod
Sep  9 19:25:59.583: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep  9 19:25:59.586: INFO: Number of nodes with available pods: 2
Sep  9 19:25:59.586: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bwpwd, will wait for the garbage collector to delete the pods
Sep  9 19:25:59.662: INFO: Deleting DaemonSet.extensions daemon-set took: 6.078492ms
Sep  9 19:25:59.762: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.287989ms
Sep  9 19:26:10.165: INFO: Number of nodes with available pods: 0
Sep  9 19:26:10.165: INFO: Number of running nodes: 0, number of available pods: 0
Sep  9 19:26:10.167: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bwpwd/daemonsets","resourceVersion":"746349"},"items":null}

Sep  9 19:26:10.170: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bwpwd/pods","resourceVersion":"746349"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:26:10.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bwpwd" for this suite.
Sep  9 19:26:16.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:26:16.210: INFO: namespace: e2e-tests-daemonsets-bwpwd, resource: bindings, ignored listing per whitelist
Sep  9 19:26:16.273: INFO: namespace e2e-tests-daemonsets-bwpwd deletion completed in 6.089204617s

• [SLOW TEST:43.968 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
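The rolling-update spec above creates a DaemonSet running nginx:1.14-alpine, switches the pod template to the redis test image, and waits for every node to come back with the new image, which is exactly the "Wrong image for pod" / "is not available" churn in the log. A minimal hand-run equivalent; the names are illustrative and the images are the ones quoted in the log:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Update the pod template image; the controller replaces the daemon pods node by node.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set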
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:26:16.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Sep  9 19:26:20.898: INFO: Successfully updated pod "annotationupdate548a59c1-f2d2-11ea-88c2-0242ac110007"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:26:22.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pvgkz" for this suite.
Sep  9 19:26:44.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:26:45.025: INFO: namespace: e2e-tests-projected-pvgkz, resource: bindings, ignored listing per whitelist
Sep  9 19:26:45.087: INFO: namespace e2e-tests-projected-pvgkz deletion completed in 22.111968806s

• [SLOW TEST:28.814 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
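The projected downwardAPI spec above mounts the pod's own annotations into a file and then changes an annotation, expecting the kubelet to refresh the file ("Successfully updated pod" marks the annotation update). A minimal sketch of the same setup; pod name, annotation key, and mount path are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Changing the annotation is eventually reflected in the mounted file.
kubectl annotate pod annotation-demo build=two --overwrite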
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:26:45.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:26:45.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Sep  9 19:26:45.397: INFO: stderr: ""
Sep  9 19:26:45.397: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-09-07T10:49:09Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:26:45.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2kmpz" for this suite.
Sep  9 19:26:51.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:26:51.496: INFO: namespace: e2e-tests-kubectl-2kmpz, resource: bindings, ignored listing per whitelist
Sep  9 19:26:51.499: INFO: namespace e2e-tests-kubectl-2kmpz deletion completed in 6.097453554s

• [SLOW TEST:6.412 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
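The version spec is a single CLI call; both halves of the stdout captured above (Client Version and Server Version) come from:

kubectl --kubeconfig=/root/.kube/config version
# a machine-readable form is also available:
kubectl version -o json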
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:26:51.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-698e3824-f2d2-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:26:51.659: INFO: Waiting up to 5m0s for pod "pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-nz8rc" to be "success or failure"
Sep  9 19:26:51.679: INFO: Pod "pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.050404ms
Sep  9 19:26:53.682: INFO: Pod "pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02287655s
Sep  9 19:26:55.733: INFO: Pod "pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072932148s
STEP: Saw pod success
Sep  9 19:26:55.733: INFO: Pod "pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:26:55.735: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Sep  9 19:26:55.770: INFO: Waiting for pod pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007 to disappear
Sep  9 19:26:55.783: INFO: Pod pod-secrets-6991b452-f2d2-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:26:55.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nz8rc" for this suite.
Sep  9 19:27:01.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:27:01.830: INFO: namespace: e2e-tests-secrets-nz8rc, resource: bindings, ignored listing per whitelist
Sep  9 19:27:01.878: INFO: namespace e2e-tests-secrets-nz8rc deletion completed in 6.090187633s

• [SLOW TEST:10.378 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
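The secret-volume spec above creates a secret, mounts it with an explicit item mapping and file mode, and reads the file back from a short-lived test pod. A hand-run sketch; the secret, pod, and path names are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400    # file mode for the mapped item
EOF
kubectl logs secret-volume-demo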
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:27:01.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Sep  9 19:27:01.950: INFO: PodSpec: initContainers in spec.initContainers
Sep  9 19:27:50.620: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6fb66fa6-f2d2-11ea-88c2-0242ac110007", GenerateName:"", Namespace:"e2e-tests-init-container-flbn8", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-flbn8/pods/pod-init-6fb66fa6-f2d2-11ea-88c2-0242ac110007", UID:"6fb8daea-f2d2-11ea-b060-0242ac120006", ResourceVersion:"746663", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735276421, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"950255494"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v7grj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b06f40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7grj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7grj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v7grj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001457638), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001bcb020), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014576c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0014576e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0014576e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0014576ec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276422, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735276421, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.1.136", StartTime:(*v1.Time)(0xc0013a54e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002cb030)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002cb0a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://89a6a54aaf603dc3065c9bb047892d1cc180747ff62dd9335f789e9d60986186"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013a5520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013a5500), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:27:50.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-flbn8" for this suite.
Sep  9 19:28:12.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:28:12.732: INFO: namespace: e2e-tests-init-container-flbn8, resource: bindings, ignored listing per whitelist
Sep  9 19:28:12.781: INFO: namespace e2e-tests-init-container-flbn8 deletion completed in 22.142936534s

• [SLOW TEST:70.904 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
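The init-container spec mirrors the Pod dump above: two init containers (init1 running /bin/false, init2 running /bin/true) sit in front of a pause app container with restartPolicy Always, so init1 keeps restarting and run1 never starts. A stripped-down reproduction; the pod name is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
# The pod stays stuck initializing with a growing restart count on init1,
# while init2 and run1 remain in the Waiting state.
kubectl get pod init-fail-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}'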
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:28:12.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-9a01f4eb-f2d2-11ea-88c2-0242ac110007
STEP: Creating secret with name s-test-opt-upd-9a01f594-f2d2-11ea-88c2-0242ac110007
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9a01f4eb-f2d2-11ea-88c2-0242ac110007
STEP: Updating secret s-test-opt-upd-9a01f594-f2d2-11ea-88c2-0242ac110007
STEP: Creating secret with name s-test-opt-create-9a01f5e3-f2d2-11ea-88c2-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:29:25.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8bg4r" for this suite.
Sep  9 19:29:47.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:29:47.354: INFO: namespace: e2e-tests-secrets-8bg4r, resource: bindings, ignored listing per whitelist
Sep  9 19:29:47.381: INFO: namespace e2e-tests-secrets-8bg4r deletion completed in 22.078340001s

• [SLOW TEST:94.600 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
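The optional-secrets spec mounts three secret volumes marked optional, then deletes one secret, updates another, and creates the third, waiting for all three changes to appear in the mounted files. A hand-run sketch; all names are illustrative and the pod just sleeps so the mounts can be inspected:

kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: del
      mountPath: /etc/secret-del
    - name: upd
      mountPath: /etc/secret-upd
    - name: create
      mountPath: /etc/secret-create
  volumes:
  - name: del
    secret:
      secretName: s-test-opt-del
      optional: true
  - name: upd
    secret:
      secretName: s-test-opt-upd
      optional: true
  - name: create
    secret:
      secretName: s-test-opt-create    # does not exist yet; optional keeps the pod schedulable
      optional: true
EOF
# Mutate the secrets; the kubelet syncs the changes into the mounted files on its next resync.
kubectl delete secret s-test-opt-del
kubectl delete secret s-test-opt-upd
kubectl create secret generic s-test-opt-upd --from-literal=data-3=value-3
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1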
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:29:47.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-mbdfs/configmap-test-d25e2412-f2d2-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:29:47.482: INFO: Waiting up to 5m0s for pod "pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-mbdfs" to be "success or failure"
Sep  9 19:29:47.501: INFO: Pod "pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.28108ms
Sep  9 19:29:49.604: INFO: Pod "pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121932137s
Sep  9 19:29:51.609: INFO: Pod "pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126639676s
STEP: Saw pod success
Sep  9 19:29:51.609: INFO: Pod "pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:29:51.612: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007 container env-test: 
STEP: delete the pod
Sep  9 19:29:51.633: INFO: Waiting for pod pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007 to disappear
Sep  9 19:29:51.648: INFO: Pod pod-configmaps-d25f7c96-f2d2-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:29:51.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mbdfs" for this suite.
Sep  9 19:29:57.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:29:57.733: INFO: namespace: e2e-tests-configmap-mbdfs, resource: bindings, ignored listing per whitelist
Sep  9 19:29:57.741: INFO: namespace e2e-tests-configmap-mbdfs deletion completed in 6.089675904s

• [SLOW TEST:10.359 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
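The configMap-environment spec creates a ConfigMap and injects one of its keys into the test container's environment, then checks the container output. A minimal equivalent; the ConfigMap, pod, and variable names are illustrative:

kubectl create configmap config-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: config-demo
          key: data-1
EOF
kubectl logs configmap-env-demo    # expected: CONFIG_DATA_1=value-1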
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:29:57.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep  9 19:29:57.873: INFO: Waiting up to 5m0s for pod "pod-d88a9973-f2d2-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-h9hsz" to be "success or failure"
Sep  9 19:29:57.882: INFO: Pod "pod-d88a9973-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.229054ms
Sep  9 19:29:59.886: INFO: Pod "pod-d88a9973-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012779894s
Sep  9 19:30:01.890: INFO: Pod "pod-d88a9973-f2d2-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016818512s
STEP: Saw pod success
Sep  9 19:30:01.890: INFO: Pod "pod-d88a9973-f2d2-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:30:01.893: INFO: Trying to get logs from node hunter-worker2 pod pod-d88a9973-f2d2-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:30:01.946: INFO: Waiting for pod pod-d88a9973-f2d2-11ea-88c2-0242ac110007 to disappear
Sep  9 19:30:01.979: INFO: Pod pod-d88a9973-f2d2-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:30:01.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h9hsz" for this suite.
Sep  9 19:30:07.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:30:08.076: INFO: namespace: e2e-tests-emptydir-h9hsz, resource: bindings, ignored listing per whitelist
Sep  9 19:30:08.103: INFO: namespace e2e-tests-emptydir-h9hsz deletion completed in 6.120546948s

• [SLOW TEST:10.362 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
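The emptyDir spec above requests the Memory medium, i.e. a tmpfs-backed volume, and then asserts the mount's file mode. A sketch that lets the same thing be inspected from inside the container; the pod name and test image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # back the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-tmpfs-demo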
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:30:08.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-debadcaf-f2d2-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:30:08.259: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-wt6sl" to be "success or failure"
Sep  9 19:30:08.270: INFO: Pod "pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.681632ms
Sep  9 19:30:10.275: INFO: Pod "pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015297712s
Sep  9 19:30:12.279: INFO: Pod "pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019402154s
STEP: Saw pod success
Sep  9 19:30:12.279: INFO: Pod "pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:30:12.281: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 19:30:12.320: INFO: Waiting for pod pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007 to disappear
Sep  9 19:30:12.398: INFO: Pod pod-projected-configmaps-dec23d00-f2d2-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:30:12.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wt6sl" for this suite.
Sep  9 19:30:18.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:30:18.441: INFO: namespace: e2e-tests-projected-wt6sl, resource: bindings, ignored listing per whitelist
Sep  9 19:30:18.498: INFO: namespace e2e-tests-projected-wt6sl deletion completed in 6.09612006s

• [SLOW TEST:10.395 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
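The projected configMap spec above exercises key-to-path remapping plus a per-item file mode inside a projected volume. A hedged sketch of the same shape, with assumed names (demo-config, data-1, /etc/projected):

kubectl create configmap demo-config --from-literal=data-1='value-1'
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-2 ; stat -Lc '%a' /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: path/to/data-2   # key remapped to a different relative path
            mode: 0400             # per-item mode, overriding the volume default
EOF
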
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:30:18.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Sep  9 19:30:18.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-trbbp,SelfLink:/api/v1/namespaces/e2e-tests-watch-trbbp/configmaps/e2e-watch-test-label-changed,UID:e4f0b234-f2d2-11ea-b060-0242ac120006,ResourceVersion:747083,Generation:0,CreationTimestamp:2020-09-09 19:30:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep  9 19:30:18.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-trbbp,SelfLink:/api/v1/namespaces/e2e-tests-watch-trbbp/configmaps/e2e-watch-test-label-changed,UID:e4f0b234-f2d2-11ea-b060-0242ac120006,ResourceVersion:747084,Generation:0,CreationTimestamp:2020-09-09 19:30:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep  9 19:30:18.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-trbbp,SelfLink:/api/v1/namespaces/e2e-tests-watch-trbbp/configmaps/e2e-watch-test-label-changed,UID:e4f0b234-f2d2-11ea-b060-0242ac120006,ResourceVersion:747085,Generation:0,CreationTimestamp:2020-09-09 19:30:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep  9 19:30:28.711: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-trbbp,SelfLink:/api/v1/namespaces/e2e-tests-watch-trbbp/configmaps/e2e-watch-test-label-changed,UID:e4f0b234-f2d2-11ea-b060-0242ac120006,ResourceVersion:747106,Generation:0,CreationTimestamp:2020-09-09 19:30:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep  9 19:30:28.711: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-trbbp,SelfLink:/api/v1/namespaces/e2e-tests-watch-trbbp/configmaps/e2e-watch-test-label-changed,UID:e4f0b234-f2d2-11ea-b060-0242ac120006,ResourceVersion:747107,Generation:0,CreationTimestamp:2020-09-09 19:30:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Sep  9 19:30:28.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-trbbp,SelfLink:/api/v1/namespaces/e2e-tests-watch-trbbp/configmaps/e2e-watch-test-label-changed,UID:e4f0b234-f2d2-11ea-b060-0242ac120006,ResourceVersion:747108,Generation:0,CreationTimestamp:2020-09-09 19:30:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:30:28.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-trbbp" for this suite.
Sep  9 19:30:34.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:30:34.861: INFO: namespace: e2e-tests-watch-trbbp, resource: bindings, ignored listing per whitelist
Sep  9 19:30:34.873: INFO: namespace e2e-tests-watch-trbbp deletion completed in 6.150916494s

• [SLOW TEST:16.375 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
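The watch test above demonstrates that an object leaving and re-entering a label selector is delivered as DELETED and ADDED watch events, and that changes made while it does not match are not delivered at all. A CLI sketch of the same idea, with an assumed configmap name; the --output-watch-events flag needs a newer kubectl than the 1.13 client used in this run:

kubectl create configmap e2e-watch-demo
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored
# In another terminal, watch only configmaps matching the label selector:
kubectl get configmap -l watch-this-configmap=label-changed-and-restored --watch --output-watch-events
# Changing the label value drops the object out of the selector: a DELETED event on the watch.
kubectl label configmap e2e-watch-demo watch-this-configmap=temporarily-off --overwrite
# Edits made while it does not match are not delivered to this watch.
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
# Restoring the label re-adds it: an ADDED event, then MODIFIED/DELETED for later changes.
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored --overwrite
kubectl delete configmap e2e-watch-demo
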
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:30:34.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-eeb25b49-f2d2-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:30:35.003: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-8fn29" to be "success or failure"
Sep  9 19:30:35.050: INFO: Pod "pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 47.680108ms
Sep  9 19:30:37.054: INFO: Pod "pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051524673s
Sep  9 19:30:39.059: INFO: Pod "pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056270756s
STEP: Saw pod success
Sep  9 19:30:39.059: INFO: Pod "pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:30:39.062: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Sep  9 19:30:39.086: INFO: Waiting for pod pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007 to disappear
Sep  9 19:30:39.165: INFO: Pod pod-projected-configmaps-eeb2d8b5-f2d2-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:30:39.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8fn29" for this suite.
Sep  9 19:30:45.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:30:45.200: INFO: namespace: e2e-tests-projected-8fn29, resource: bindings, ignored listing per whitelist
Sep  9 19:30:45.254: INFO: namespace e2e-tests-projected-8fn29 deletion completed in 6.084962544s

• [SLOW TEST:10.381 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
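Compared with the per-item mode sketch earlier, the defaultMode variant above sets the permission bits once for every file projected from the source. A minimal sketch, again with assumed names and reusing the demo-config configmap from the earlier sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a %n' /etc/projected/*"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected
  volumes:
  - name: cm-volume
    projected:
      defaultMode: 0400        # applied to every projected file unless an item overrides it
      sources:
      - configMap:
          name: demo-config    # created in the earlier mapping sketch
EOF
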
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:30:45.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  9 19:30:45.375: INFO: Waiting up to 5m0s for pod "pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-f4g47" to be "success or failure"
Sep  9 19:30:45.393: INFO: Pod "pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.536909ms
Sep  9 19:30:47.397: INFO: Pod "pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021724454s
Sep  9 19:30:49.401: INFO: Pod "pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0257435s
STEP: Saw pod success
Sep  9 19:30:49.401: INFO: Pod "pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:30:49.403: INFO: Trying to get logs from node hunter-worker pod pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:30:49.482: INFO: Waiting for pod pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007 to disappear
Sep  9 19:30:49.488: INFO: Pod pod-f4e1ad59-f2d2-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:30:49.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f4g47" for this suite.
Sep  9 19:30:55.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:30:55.595: INFO: namespace: e2e-tests-emptydir-f4g47, resource: bindings, ignored listing per whitelist
Sep  9 19:30:55.602: INFO: namespace e2e-tests-emptydir-f4g47 deletion completed in 6.110880318s

• [SLOW TEST:10.348 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
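The (non-root,0777,default) case above runs the container under a non-root UID and checks that a file created with mode 0777 on a default-medium emptyDir keeps those bits. Sketch, with 1001 as an arbitrary assumed UID:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # arbitrary non-root UID, for illustration only
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && stat -c '%a %u' /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # medium omitted: the node's default (disk-backed) storage
EOF
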
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:30:55.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 19:30:55.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-c482g'
Sep  9 19:30:58.316: INFO: stderr: ""
Sep  9 19:30:58.316: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Sep  9 19:30:58.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-c482g'
Sep  9 19:31:09.444: INFO: stderr: ""
Sep  9 19:31:09.444: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:31:09.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c482g" for this suite.
Sep  9 19:31:15.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:31:15.518: INFO: namespace: e2e-tests-kubectl-c482g, resource: bindings, ignored listing per whitelist
Sep  9 19:31:15.546: INFO: namespace e2e-tests-kubectl-c482g deletion completed in 6.097739218s

• [SLOW TEST:19.944 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
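The kubectl invocation above is reproduced below in standalone form. The --generator=run-pod/v1 flag was needed on the 1.13-era client from this run to force a bare Pod; current kubectl creates a Pod from kubectl run by default, so the flag is omitted here:

kubectl run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine
# --restart=Never yields a Pod object directly, with no Deployment or Job wrapping it.
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.kind}{" "}{.spec.restartPolicy}{"\n"}'
kubectl delete pod e2e-test-nginx-pod
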
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:31:15.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Sep  9 19:31:15.646: INFO: Waiting up to 5m0s for pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-9n4zw" to be "success or failure"
Sep  9 19:31:15.658: INFO: Pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.806287ms
Sep  9 19:31:17.681: INFO: Pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035176471s
Sep  9 19:31:19.710: INFO: Pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064653151s
Sep  9 19:31:21.714: INFO: Pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068214885s
Sep  9 19:31:23.718: INFO: Pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072116462s
STEP: Saw pod success
Sep  9 19:31:23.718: INFO: Pod "pod-06ecc498-f2d3-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:31:23.720: INFO: Trying to get logs from node hunter-worker pod pod-06ecc498-f2d3-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:31:24.222: INFO: Waiting for pod pod-06ecc498-f2d3-11ea-88c2-0242ac110007 to disappear
Sep  9 19:31:24.255: INFO: Pod pod-06ecc498-f2d3-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:31:24.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9n4zw" for this suite.
Sep  9 19:31:30.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:31:30.311: INFO: namespace: e2e-tests-emptydir-9n4zw, resource: bindings, ignored listing per whitelist
Sep  9 19:31:30.336: INFO: namespace e2e-tests-emptydir-9n4zw deletion completed in 6.077236208s

• [SLOW TEST:14.790 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
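The default-medium variant above differs from the tmpfs sketch earlier only in omitting the medium field; the mount point is still expected to come up world-writable. Minimal sketch with assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]    # expect 777
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # no medium: backed by the node's default storage instead of tmpfs
EOF
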
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:31:30.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Sep  9 19:31:30.464: INFO: Waiting up to 5m0s for pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007" in namespace "e2e-tests-containers-lqghc" to be "success or failure"
Sep  9 19:31:30.496: INFO: Pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.785576ms
Sep  9 19:31:32.799: INFO: Pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334349876s
Sep  9 19:31:34.802: INFO: Pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337813322s
Sep  9 19:31:36.805: INFO: Pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340407074s
Sep  9 19:31:38.808: INFO: Pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.343719939s
STEP: Saw pod success
Sep  9 19:31:38.808: INFO: Pod "client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:31:38.810: INFO: Trying to get logs from node hunter-worker2 pod client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:31:38.847: INFO: Waiting for pod client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007 to disappear
Sep  9 19:31:38.852: INFO: Pod client-containers-0fc191f7-f2d3-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:31:38.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lqghc" for this suite.
Sep  9 19:31:44.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:31:44.892: INFO: namespace: e2e-tests-containers-lqghc, resource: bindings, ignored listing per whitelist
Sep  9 19:31:44.947: INFO: namespace e2e-tests-containers-lqghc deletion completed in 6.09173389s

• [SLOW TEST:14.611 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
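The Docker Containers test above leaves both command and args empty, so the container runs whatever ENTRYPOINT and CMD the image was built with. Sketch; the e2e suite uses its own test image, and nginx is substituted here purely for illustration:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine
    # No command or args: Kubernetes falls back to the image's ENTRYPOINT and CMD.
EOF
kubectl get pod image-defaults-demo -o jsonpath='{.spec.containers[0].command}{"\n"}'   # empty
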
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:31:44.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Sep  9 19:31:45.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6mrrm'
Sep  9 19:31:45.321: INFO: stderr: ""
Sep  9 19:31:45.321: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep  9 19:31:46.326: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:31:46.327: INFO: Found 0 / 1
Sep  9 19:31:47.340: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:31:47.340: INFO: Found 0 / 1
Sep  9 19:31:48.324: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:31:48.324: INFO: Found 0 / 1
Sep  9 19:31:49.324: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:31:49.324: INFO: Found 1 / 1
Sep  9 19:31:49.324: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Sep  9 19:31:49.327: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:31:49.327: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  9 19:31:49.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-htc24 --namespace=e2e-tests-kubectl-6mrrm -p {"metadata":{"annotations":{"x":"y"}}}'
Sep  9 19:31:49.453: INFO: stderr: ""
Sep  9 19:31:49.453: INFO: stdout: "pod/redis-master-htc24 patched\n"
STEP: checking annotations
Sep  9 19:31:49.477: INFO: Selector matched 1 pods for map[app:redis]
Sep  9 19:31:49.477: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:31:49.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6mrrm" for this suite.
Sep  9 19:32:11.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:32:11.564: INFO: namespace: e2e-tests-kubectl-6mrrm, resource: bindings, ignored listing per whitelist
Sep  9 19:32:11.578: INFO: namespace e2e-tests-kubectl-6mrrm deletion completed in 22.096981356s

• [SLOW TEST:26.631 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
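The patch step above is a strategic-merge patch that adds a single annotation; it can be replayed against any running pod. Sketch using a placeholder pod name rather than the generated redis-master-htc24 from this run:

kubectl patch pod <pod-name> -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.x}{"\n"}'    # prints: y
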
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:32:11.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:32:11.714: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Sep  9 19:32:16.718: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep  9 19:32:16.719: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Sep  9 19:32:16.741: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-nxhg2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nxhg2/deployments/test-cleanup-deployment,UID:2b553a3f-f2d3-11ea-b060-0242ac120006,ResourceVersion:747495,Generation:1,CreationTimestamp:2020-09-09 19:32:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Sep  9 19:32:16.747: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Sep  9 19:32:16.747: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Sep  9 19:32:16.747: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-nxhg2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nxhg2/replicasets/test-cleanup-controller,UID:285272e6-f2d3-11ea-b060-0242ac120006,ResourceVersion:747496,Generation:1,CreationTimestamp:2020-09-09 19:32:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2b553a3f-f2d3-11ea-b060-0242ac120006 0xc002cab197 0xc002cab198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep  9 19:32:16.753: INFO: Pod "test-cleanup-controller-w4gdm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-w4gdm,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-nxhg2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nxhg2/pods/test-cleanup-controller-w4gdm,UID:285965d0-f2d3-11ea-b060-0242ac120006,ResourceVersion:747489,Generation:0,CreationTimestamp:2020-09-09 19:32:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 285272e6-f2d3-11ea-b060-0242ac120006 0xc002cab717 0xc002cab718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-k9csx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k9csx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k9csx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cab790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cab7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:32:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:32:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:32:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-09 19:32:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.114,StartTime:2020-09-09 19:32:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-09 19:32:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fa40e524ced36aaa63916f75b697e019d12bfbb51f204e6c3f91f97b83ac4d99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:32:16.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nxhg2" for this suite.
Sep  9 19:32:22.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:32:22.900: INFO: namespace: e2e-tests-deployment-nxhg2, resource: bindings, ignored listing per whitelist
Sep  9 19:32:22.919: INFO: namespace e2e-tests-deployment-nxhg2 deletion completed in 6.142577605s

• [SLOW TEST:11.340 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
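The deployment dumped above carries revisionHistoryLimit: 0 (RevisionHistoryLimit:*0 in the struct dump), which is what makes the controller delete old ReplicaSets, here the pre-existing test-cleanup-controller, once they are scaled down. Minimal sketch with an assumed name and only the images that appear in this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no old ReplicaSets around after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# After an image update, once the rollout completes only the newest ReplicaSet should remain.
kubectl set image deployment/cleanup-demo redis=docker.io/library/nginx:1.14-alpine
kubectl get replicasets -l name=cleanup-pod
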
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:32:22.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2f19d238-f2d3-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:32:23.059: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-m8sl6" to be "success or failure"
Sep  9 19:32:23.079: INFO: Pod "pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.504963ms
Sep  9 19:32:25.089: INFO: Pod "pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029938766s
Sep  9 19:32:27.092: INFO: Pod "pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033461555s
STEP: Saw pod success
Sep  9 19:32:27.092: INFO: Pod "pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:32:27.095: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Sep  9 19:32:27.113: INFO: Waiting for pod pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007 to disappear
Sep  9 19:32:27.167: INFO: Pod pod-projected-secrets-2f1a59b1-f2d3-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:32:27.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m8sl6" for this suite.
Sep  9 19:32:33.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:32:33.198: INFO: namespace: e2e-tests-projected-m8sl6, resource: bindings, ignored listing per whitelist
Sep  9 19:32:33.285: INFO: namespace e2e-tests-projected-m8sl6 deletion completed in 6.113915726s

• [SLOW TEST:10.366 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
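Same projected-volume pattern as the configMap sketches above, but with a secret source. Assumed names throughout:

kubectl create secret generic projected-secret-demo --from-literal=data-1='value-1'
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF
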
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:32:33.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-354b7f76-f2d3-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:32:33.457: INFO: Waiting up to 5m0s for pod "pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-gntvs" to be "success or failure"
Sep  9 19:32:33.459: INFO: Pod "pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157285ms
Sep  9 19:32:35.463: INFO: Pod "pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006205406s
Sep  9 19:32:37.467: INFO: Pod "pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01027231s
STEP: Saw pod success
Sep  9 19:32:37.467: INFO: Pod "pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:32:37.470: INFO: Trying to get logs from node hunter-worker pod pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Sep  9 19:32:37.540: INFO: Waiting for pod pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007 to disappear
Sep  9 19:32:37.591: INFO: Pod pod-secrets-354cf300-f2d3-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:32:37.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gntvs" for this suite.
Sep  9 19:32:43.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:32:43.662: INFO: namespace: e2e-tests-secrets-gntvs, resource: bindings, ignored listing per whitelist
Sep  9 19:32:43.703: INFO: namespace e2e-tests-secrets-gntvs deletion completed in 6.103668107s

• [SLOW TEST:10.418 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
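The variant above combines a plain secret volume with defaultMode and a pod-level fsGroup so that a non-root UID can still read the group-owned files. Sketch with arbitrary assumed IDs (1000 and 1001):

kubectl create secret generic secret-mode-demo --from-literal=data-1='value-1'
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, for illustration
    fsGroup: 1001              # secret files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "id ; stat -Lc '%a %g' /etc/secret/data-1 ; cat /etc/secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0440        # readable by owner and group only
EOF
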
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:32:43.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Sep  9 19:32:43.816: INFO: Waiting up to 5m0s for pod "downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-b2wgm" to be "success or failure"
Sep  9 19:32:43.820: INFO: Pod "downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.446796ms
Sep  9 19:32:46.107: INFO: Pod "downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290839001s
Sep  9 19:32:48.111: INFO: Pod "downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294987827s
STEP: Saw pod success
Sep  9 19:32:48.111: INFO: Pod "downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:32:48.114: INFO: Trying to get logs from node hunter-worker2 pod downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007 container dapi-container: 
STEP: delete the pod
Sep  9 19:32:48.444: INFO: Waiting for pod downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007 to disappear
Sep  9 19:32:48.465: INFO: Pod downward-api-3b7852e9-f2d3-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:32:48.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b2wgm" for this suite.
Sep  9 19:32:54.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:32:54.540: INFO: namespace: e2e-tests-downward-api-b2wgm, resource: bindings, ignored listing per whitelist
Sep  9 19:32:54.577: INFO: namespace e2e-tests-downward-api-b2wgm deletion completed in 6.108333628s

• [SLOW TEST:10.874 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
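The Downward API test above relies on the documented fallback: when a container declares no CPU or memory limits, resourceFieldRef values for limits.cpu and limits.memory resolve to the node's allocatable capacity. Sketch with assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    # No resources.limits set, so these fall back to the node's allocatable values.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs downward-limits-demo    # once the pod has completed: the node's allocatable CPU and memory
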
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:32:54.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Sep  9 19:32:54.805: INFO: Waiting up to 5m0s for pod "var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007" in namespace "e2e-tests-var-expansion-8qs84" to be "success or failure"
Sep  9 19:32:54.850: INFO: Pod "var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 44.756978ms
Sep  9 19:32:56.853: INFO: Pod "var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048547714s
Sep  9 19:32:58.858: INFO: Pod "var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052693764s
STEP: Saw pod success
Sep  9 19:32:58.858: INFO: Pod "var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:32:58.861: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007 container dapi-container: 
STEP: delete the pod
Sep  9 19:32:58.904: INFO: Waiting for pod var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007 to disappear
Sep  9 19:32:58.909: INFO: Pod var-expansion-41f3f37b-f2d3-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:32:58.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-8qs84" for this suite.
Sep  9 19:33:04.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:33:04.983: INFO: namespace: e2e-tests-var-expansion-8qs84, resource: bindings, ignored listing per whitelist
Sep  9 19:33:04.998: INFO: namespace e2e-tests-var-expansion-8qs84 deletion completed in 6.085648987s

• [SLOW TEST:10.420 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
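The substitution checked above is the $(VAR) syntax that Kubernetes expands in command and args from the container's own environment, before the process starts. Sketch with assumed names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-message"
    # $(MESSAGE) is expanded by the kubelet before the shell ever sees the command line.
    command: ["sh", "-c", "echo $(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # once the pod has completed: test-message
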
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:33:04.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Sep  9 19:33:05.628: INFO: Waiting up to 5m0s for pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql" in namespace "e2e-tests-svcaccounts-bk7x8" to be "success or failure"
Sep  9 19:33:05.669: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql": Phase="Pending", Reason="", readiness=false. Elapsed: 41.674927ms
Sep  9 19:33:07.674: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046545853s
Sep  9 19:33:09.868: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240394949s
Sep  9 19:33:11.872: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql": Phase="Running", Reason="", readiness=false. Elapsed: 6.244403906s
Sep  9 19:33:13.875: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.247765864s
STEP: Saw pod success
Sep  9 19:33:13.876: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql" satisfied condition "success or failure"
Sep  9 19:33:13.878: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql container token-test: 
STEP: delete the pod
Sep  9 19:33:13.922: INFO: Waiting for pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql to disappear
Sep  9 19:33:13.940: INFO: Pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-vrrql no longer exists
STEP: Creating a pod to test consume service account root CA
Sep  9 19:33:13.943: INFO: Waiting up to 5m0s for pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9" in namespace "e2e-tests-svcaccounts-bk7x8" to be "success or failure"
Sep  9 19:33:13.963: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.138294ms
Sep  9 19:33:16.023: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07922552s
Sep  9 19:33:18.062: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118413843s
Sep  9 19:33:20.066: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9": Phase="Running", Reason="", readiness=false. Elapsed: 6.122216147s
Sep  9 19:33:22.069: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125599856s
STEP: Saw pod success
Sep  9 19:33:22.069: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9" satisfied condition "success or failure"
Sep  9 19:33:22.071: INFO: Trying to get logs from node hunter-worker pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9 container root-ca-test: 
STEP: delete the pod
Sep  9 19:33:22.121: INFO: Waiting for pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9 to disappear
Sep  9 19:33:22.128: INFO: Pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-znlg9 no longer exists
STEP: Creating a pod to test consume service account namespace
Sep  9 19:33:22.132: INFO: Waiting up to 5m0s for pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t" in namespace "e2e-tests-svcaccounts-bk7x8" to be "success or failure"
Sep  9 19:33:22.161: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t": Phase="Pending", Reason="", readiness=false. Elapsed: 28.946348ms
Sep  9 19:33:24.168: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03599829s
Sep  9 19:33:26.226: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093553067s
Sep  9 19:33:28.354: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221353326s
STEP: Saw pod success
Sep  9 19:33:28.354: INFO: Pod "pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t" satisfied condition "success or failure"
Sep  9 19:33:28.356: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t container namespace-test: 
STEP: delete the pod
Sep  9 19:33:28.431: INFO: Waiting for pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t to disappear
Sep  9 19:33:28.437: INFO: Pod pod-service-account-487a06c4-f2d3-11ea-88c2-0242ac110007-tq56t no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:33:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-bk7x8" for this suite.
Sep  9 19:33:34.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:33:34.515: INFO: namespace: e2e-tests-svcaccounts-bk7x8, resource: bindings, ignored listing per whitelist
Sep  9 19:33:34.538: INFO: namespace e2e-tests-svcaccounts-bk7x8 deletion completed in 6.095919905s

• [SLOW TEST:29.540 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
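The three pods in the spec above each read one of the files the kubelet projects from the pod's service account (token, ca.crt, namespace). A minimal sketch of an equivalent pod, with an illustrative name and image rather than the framework-generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sa-mount-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cd /var/run/secrets/kubernetes.io/serviceaccount && cat token ca.crt namespace"]
EOF
kubectl logs sa-mount-demo       # prints the mounted token, root CA and namespace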
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:33:34.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-lts9c
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lts9c to expose endpoints map[]
Sep  9 19:33:34.731: INFO: Get endpoints failed (10.075021ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Sep  9 19:33:35.735: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lts9c exposes endpoints map[] (1.013961776s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-lts9c
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lts9c to expose endpoints map[pod1:[80]]
Sep  9 19:33:38.783: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lts9c exposes endpoints map[pod1:[80]] (3.04069423s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-lts9c
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lts9c to expose endpoints map[pod1:[80] pod2:[80]]
Sep  9 19:33:42.851: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lts9c exposes endpoints map[pod2:[80] pod1:[80]] (4.063991289s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-lts9c
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lts9c to expose endpoints map[pod2:[80]]
Sep  9 19:33:43.915: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lts9c exposes endpoints map[pod2:[80]] (1.060024284s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-lts9c
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lts9c to expose endpoints map[]
Sep  9 19:33:44.932: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lts9c exposes endpoints map[] (1.012376078s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:33:44.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lts9c" for this suite.
Sep  9 19:33:50.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:33:50.995: INFO: namespace: e2e-tests-services-lts9c, resource: bindings, ignored listing per whitelist
Sep  9 19:33:51.072: INFO: namespace e2e-tests-services-lts9c deletion completed in 6.103037897s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:16.534 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
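The spec above checks that the Endpoints object tracks pod lifecycle: endpoint-test2 starts empty, gains pod1 and pod2 as they come up, and shrinks again as they are deleted. A rough by-hand equivalent (the selector/label pairing and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2          # illustrative label; the pod below must match it
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test2
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
EOF
kubectl get endpoints endpoint-test2    # no addresses at first, then pod1's IP:80 once it is Running
kubectl delete pod pod1 && kubectl get endpoints endpoint-test2    # back to no addresses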
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:33:51.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-63a3fe31-f2d3-11ea-88c2-0242ac110007
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:33:55.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gv76p" for this suite.
Sep  9 19:34:17.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:34:17.573: INFO: namespace: e2e-tests-configmap-gv76p, resource: bindings, ignored listing per whitelist
Sep  9 19:34:17.633: INFO: namespace e2e-tests-configmap-gv76p deletion completed in 22.111604188s

• [SLOW TEST:26.561 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
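The spec above puts both data (UTF-8 text) and binaryData (base64-encoded bytes) into one ConfigMap and checks that both show up as files in the mounted volume. An illustrative ConfigMap of that shape:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo    # illustrative name
data:
  data-1: "plain text value"
binaryData:
  dump.bin: "3q2+7w=="           # base64 for the raw bytes 0xde 0xad 0xbe 0xef
EOF

A pod mounting this ConfigMap as a volume sees data-1 with the text content and dump.bin with the decoded bytes.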
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:34:17.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 19:34:17.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kthmz'
Sep  9 19:34:17.798: INFO: stderr: ""
Sep  9 19:34:17.798: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Sep  9 19:34:22.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kthmz -o json'
Sep  9 19:34:22.942: INFO: stderr: ""
Sep  9 19:34:22.942: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-09-09T19:34:17Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-kthmz\",\n        \"resourceVersion\": \"748065\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-kthmz/pods/e2e-test-nginx-pod\",\n        \"uid\": \"737da531-f2d3-11ea-b060-0242ac120006\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-swppx\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-swppx\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-swppx\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T19:34:17Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T19:34:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T19:34:21Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-09T19:34:17Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://1610898a3a63c7a188f512bfbaf96a109f9d5d027480773bc71bea0ee181a321\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-09-09T19:34:20Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.7\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.122\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-09-09T19:34:17Z\"\n    }\n}\n"
STEP: replace the image in the pod
Sep  9 19:34:22.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-kthmz'
Sep  9 19:34:23.186: INFO: stderr: ""
Sep  9 19:34:23.187: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Sep  9 19:34:23.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kthmz'
Sep  9 19:34:30.073: INFO: stderr: ""
Sep  9 19:34:30.073: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:34:30.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kthmz" for this suite.
Sep  9 19:34:36.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:34:36.114: INFO: namespace: e2e-tests-kubectl-kthmz, resource: bindings, ignored listing per whitelist
Sep  9 19:34:36.167: INFO: namespace e2e-tests-kubectl-kthmz deletion completed in 6.090395456s

• [SLOW TEST:18.534 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
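The replace spec round-trips the live object: get the pod as JSON, change .spec.containers[0].image, and feed the result back to kubectl replace. Roughly the same by hand (the sed edit stands in for the in-memory edit the test performs):

kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'   # docker.io/library/busybox:1.29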
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:34:36.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep  9 19:34:36.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-rn59n'
Sep  9 19:34:36.362: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep  9 19:34:36.362: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Sep  9 19:34:40.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-rn59n'
Sep  9 19:34:40.490: INFO: stderr: ""
Sep  9 19:34:40.491: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:34:40.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rn59n" for this suite.
Sep  9 19:35:02.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:35:02.526: INFO: namespace: e2e-tests-kubectl-rn59n, resource: bindings, ignored listing per whitelist
Sep  9 19:35:02.654: INFO: namespace e2e-tests-kubectl-rn59n deletion completed in 22.159366934s

• [SLOW TEST:26.486 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
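The stderr line above is the client warning that --generator=deployment/v1beta1 is deprecated; kubectl create deployment produces an equivalent Deployment without it. Either of the following (use one or the other, they create the same name):

# Deprecated form exercised by the spec (emits the warning captured in the log):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1
# Non-deprecated equivalent:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment
kubectl get pods         # one pod controlled by the deployment's ReplicaSet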
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:35:02.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-trlx9
Sep  9 19:35:06.785: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-trlx9
STEP: checking the pod's current state and verifying that restartCount is present
Sep  9 19:35:06.787: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:39:07.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-trlx9" for this suite.
Sep  9 19:39:13.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:39:13.804: INFO: namespace: e2e-tests-container-probe-trlx9, resource: bindings, ignored listing per whitelist
Sep  9 19:39:13.844: INFO: namespace e2e-tests-container-probe-trlx9 deletion completed in 6.081411567s

• [SLOW TEST:251.189 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
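This probe spec runs for roughly four minutes only to confirm that restartCount stays at 0 while an exec liveness probe keeps succeeding. An illustrative pod of that shape (name and image are not the framework's):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # the spec's pod is named liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # stays 0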
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:39:13.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-24069b48-f2d4-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume configMaps
Sep  9 19:39:13.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-configmap-vvmvv" to be "success or failure"
Sep  9 19:39:14.013: INFO: Pod "pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.391212ms
Sep  9 19:39:16.016: INFO: Pod "pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025166264s
Sep  9 19:39:18.020: INFO: Pod "pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028677351s
STEP: Saw pod success
Sep  9 19:39:18.020: INFO: Pod "pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:39:18.023: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Sep  9 19:39:18.059: INFO: Waiting for pod pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:39:18.095: INFO: Pod pod-configmaps-2409f708-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:39:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vvmvv" for this suite.
Sep  9 19:39:24.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:39:24.176: INFO: namespace: e2e-tests-configmap-vvmvv, resource: bindings, ignored listing per whitelist
Sep  9 19:39:24.192: INFO: namespace e2e-tests-configmap-vvmvv deletion completed in 6.092819713s

• [SLOW TEST:10.349 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
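"With mappings" means the volume uses items to remap a ConfigMap key onto an arbitrary relative path instead of the default file named after the key. An illustrative ConfigMap/pod pair (names, key and path are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-mappings-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-mappings-demo
      items:
      - key: data-1
        path: path/to/data-2     # the key is remapped to this relative path
EOF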
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:39:24.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-c7rm
STEP: Creating a pod to test atomic-volume-subpath
Sep  9 19:39:24.310: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-c7rm" in namespace "e2e-tests-subpath-8tzkz" to be "success or failure"
Sep  9 19:39:24.373: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Pending", Reason="", readiness=false. Elapsed: 63.226596ms
Sep  9 19:39:26.377: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066827469s
Sep  9 19:39:28.381: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070675811s
Sep  9 19:39:30.385: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07455551s
Sep  9 19:39:32.389: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 8.079022471s
Sep  9 19:39:34.397: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 10.087089476s
Sep  9 19:39:36.405: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 12.094880873s
Sep  9 19:39:38.409: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 14.099018066s
Sep  9 19:39:40.413: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 16.10261041s
Sep  9 19:39:42.417: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 18.106868469s
Sep  9 19:39:44.421: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 20.111130245s
Sep  9 19:39:46.426: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 22.115578613s
Sep  9 19:39:48.430: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Running", Reason="", readiness=false. Elapsed: 24.119665203s
Sep  9 19:39:50.433: INFO: Pod "pod-subpath-test-projected-c7rm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.123220436s
STEP: Saw pod success
Sep  9 19:39:50.433: INFO: Pod "pod-subpath-test-projected-c7rm" satisfied condition "success or failure"
Sep  9 19:39:50.436: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-c7rm container test-container-subpath-projected-c7rm: 
STEP: delete the pod
Sep  9 19:39:50.469: INFO: Waiting for pod pod-subpath-test-projected-c7rm to disappear
Sep  9 19:39:50.493: INFO: Pod pod-subpath-test-projected-c7rm no longer exists
STEP: Deleting pod pod-subpath-test-projected-c7rm
Sep  9 19:39:50.493: INFO: Deleting pod "pod-subpath-test-projected-c7rm" in namespace "e2e-tests-subpath-8tzkz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:39:50.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8tzkz" for this suite.
Sep  9 19:39:56.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:39:56.521: INFO: namespace: e2e-tests-subpath-8tzkz, resource: bindings, ignored listing per whitelist
Sep  9 19:39:56.587: INFO: namespace e2e-tests-subpath-8tzkz deletion completed in 6.086950015s

• [SLOW TEST:32.394 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
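The subpath spec mounts a single entry out of a projected (atomic-writer) volume via subPath and reads it back repeatedly while the pod runs. An illustrative sketch using a ConfigMap as the projected source:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm
data:
  file.txt: "mounted via subPath"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/test-volume/file.txt"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume/file.txt
      subPath: file.txt          # mount just this file out of the projected volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-cm
EOF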
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:39:56.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Sep  9 19:39:56.674: INFO: Waiting up to 5m0s for pod "var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-var-expansion-5k7vv" to be "success or failure"
Sep  9 19:39:56.681: INFO: Pod "var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.013085ms
Sep  9 19:39:58.685: INFO: Pod "var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01120297s
Sep  9 19:40:00.690: INFO: Pod "var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015812243s
STEP: Saw pod success
Sep  9 19:40:00.690: INFO: Pod "var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:40:00.693: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007 container dapi-container: 
STEP: delete the pod
Sep  9 19:40:00.712: INFO: Waiting for pod var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:40:00.718: INFO: Pod var-expansion-3d78d008-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:40:00.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-5k7vv" for this suite.
Sep  9 19:40:06.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:40:06.767: INFO: namespace: e2e-tests-var-expansion-5k7vv, resource: bindings, ignored listing per whitelist
Sep  9 19:40:06.813: INFO: namespace e2e-tests-var-expansion-5k7vv deletion completed in 6.091421616s

• [SLOW TEST:10.226 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
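Substitution in args means the kubelet expands $(VAR) references from the container's env before starting the process, so no shell is needed for the expansion. An illustrative pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c"]
    args: ["echo greeting=$(GREETING)"]   # $(GREETING) is expanded by the kubelet, not the shell
    env:
    - name: GREETING
      value: "hello from the args test"
EOF
kubectl logs var-expansion-demo   # prints: greeting=hello from the args test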
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:40:06.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Sep  9 19:40:06.948: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:40:07.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5ts4d" for this suite.
Sep  9 19:40:13.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:40:13.061: INFO: namespace: e2e-tests-kubectl-5ts4d, resource: bindings, ignored listing per whitelist
Sep  9 19:40:13.132: INFO: namespace e2e-tests-kubectl-5ts4d deletion completed in 6.091616451s

• [SLOW TEST:6.318 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
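--port 0 (here -p 0) asks kubectl proxy to bind an ephemeral port and print the address it chose, which the test then curls. The same thing by hand:

kubectl proxy -p 0 --disable-filter=true &
# stdout shows the chosen address, e.g. "Starting to serve on 127.0.0.1:<port>"
curl http://127.0.0.1:<port>/api/     # substitute the printed port; returns the APIVersions object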
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:40:13.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:40:13.231: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/
[proxied kubelet /logs/ directory listing repeated for the remaining proxy requests; the tail of this Proxy spec and the header of the following "[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook" spec, up to its ">>> kubeConfig: /root/.kube/config" line, were truncated in the captured log]
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep  9 19:40:27.574: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:27.580: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:29.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:29.678: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:31.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:31.585: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:33.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:33.584: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:35.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:35.584: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:37.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:37.585: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:39.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:39.585: INFO: Pod pod-with-poststart-http-hook still exists
Sep  9 19:40:41.580: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep  9 19:40:41.584: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:40:41.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gwd9n" for this suite.
Sep  9 19:41:03.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:41:03.666: INFO: namespace: e2e-tests-container-lifecycle-hook-gwd9n, resource: bindings, ignored listing per whitelist
Sep  9 19:41:03.727: INFO: namespace e2e-tests-container-lifecycle-hook-gwd9n deletion completed in 22.138446924s

• [SLOW TEST:44.304 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
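The spec first starts a separate pod to handle the HTTPGet hook request, then creates pod-with-poststart-http-hook whose postStart hook issues an HTTP GET against that handler; its deletion is then polled until the pod disappears. An illustrative shape of the hooked pod (the host below is a placeholder for the handler pod's IP, not a value from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.2.1       # placeholder: IP of the hook-handler pod
          port: 8080
          path: /echo?msg=poststart
EOF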
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:41:03.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Sep  9 19:41:03.865: INFO: Waiting up to 5m0s for pod "client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-containers-jgvr8" to be "success or failure"
Sep  9 19:41:03.868: INFO: Pod "client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.381781ms
Sep  9 19:41:05.872: INFO: Pod "client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006958546s
Sep  9 19:41:07.876: INFO: Pod "client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011607651s
STEP: Saw pod success
Sep  9 19:41:07.876: INFO: Pod "client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:41:07.880: INFO: Trying to get logs from node hunter-worker2 pod client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:41:07.900: INFO: Waiting for pod client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:41:07.917: INFO: Pod client-containers-6585bae3-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:41:07.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jgvr8" for this suite.
Sep  9 19:41:13.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:41:14.021: INFO: namespace: e2e-tests-containers-jgvr8, resource: bindings, ignored listing per whitelist
Sep  9 19:41:14.059: INFO: namespace e2e-tests-containers-jgvr8 deletion completed in 6.103634502s

• [SLOW TEST:10.331 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
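"Override command" means .spec.containers[].command replaces the image's Docker ENTRYPOINT. An illustrative pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["echo", "command overrides the image ENTRYPOINT"]
EOF
kubectl logs client-containers-demo

Setting args instead would override the image's CMD while keeping its ENTRYPOINT.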
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:41:14.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-6bb2a14e-f2d4-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:41:14.217: INFO: Waiting up to 5m0s for pod "pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-fg94g" to be "success or failure"
Sep  9 19:41:14.219: INFO: Pod "pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632579ms
Sep  9 19:41:16.223: INFO: Pod "pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006876704s
Sep  9 19:41:18.228: INFO: Pod "pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010986054s
STEP: Saw pod success
Sep  9 19:41:18.228: INFO: Pod "pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:41:18.230: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Sep  9 19:41:18.293: INFO: Waiting for pod pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:41:18.385: INFO: Pod pod-secrets-6bb31cc6-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:41:18.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fg94g" for this suite.
Sep  9 19:41:24.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:41:24.487: INFO: namespace: e2e-tests-secrets-fg94g, resource: bindings, ignored listing per whitelist
Sep  9 19:41:24.489: INFO: namespace e2e-tests-secrets-fg94g deletion completed in 6.099804743s

• [SLOW TEST:10.431 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
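As with the ConfigMap case, "with mappings" means the secret volume's items remap a key to a chosen file name. An illustrative Secret/pod pair (names and values are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-map-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1    # remap the secret key to a different file name
EOF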
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:41:24.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  9 19:41:24.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-2g6vr" to be "success or failure"
Sep  9 19:41:24.629: INFO: Pod "downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.6767ms
Sep  9 19:41:26.672: INFO: Pod "downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046163587s
Sep  9 19:41:28.675: INFO: Pod "downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049626283s
STEP: Saw pod success
Sep  9 19:41:28.675: INFO: Pod "downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:41:28.678: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007 container client-container: 
STEP: delete the pod
Sep  9 19:41:28.739: INFO: Waiting for pod downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:41:28.743: INFO: Pod downwardapi-volume-71e6ebda-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:41:28.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2g6vr" for this suite.
Sep  9 19:41:34.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:41:34.837: INFO: namespace: e2e-tests-downward-api-2g6vr, resource: bindings, ignored listing per whitelist
Sep  9 19:41:34.842: INFO: namespace e2e-tests-downward-api-2g6vr deletion completed in 6.096983219s

• [SLOW TEST:10.353 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
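Here the downward API volume exposes the container's own memory limit as a file via resourceFieldRef. An illustrative pod (name, image and the 64Mi limit are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downwardapi-limit-demo   # prints 67108864 (64Mi in bytes)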
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:41:34.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Sep  9 19:41:54.973: INFO: Container started at 2020-09-09 19:41:37 +0000 UTC, pod became ready at 2020-09-09 19:41:54 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:41:54.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-grg8b" for this suite.
Sep  9 19:42:17.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:42:17.073: INFO: namespace: e2e-tests-container-probe-grg8b, resource: bindings, ignored listing per whitelist
Sep  9 19:42:17.080: INFO: namespace e2e-tests-container-probe-grg8b deletion completed in 22.103949805s

• [SLOW TEST:42.237 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
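The readiness spec verifies that the Ready condition does not flip to True before the probe's initialDelaySeconds has elapsed (about 17 s between container start and readiness in the log above), and that the container is never restarted. An illustrative pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo     # illustrative name
spec:
  containers:
  - name: probe-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["true"]        # always succeeds once probing starts
      initialDelaySeconds: 15    # the pod cannot become Ready before this delay
      periodSeconds: 5
EOF
kubectl get pod readiness-delay-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'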
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:42:17.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  9 19:42:17.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-projected-wdkpm" to be "success or failure"
Sep  9 19:42:17.191: INFO: Pod "downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.001976ms
Sep  9 19:42:19.249: INFO: Pod "downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061618425s
Sep  9 19:42:21.253: INFO: Pod "downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065795187s
STEP: Saw pod success
Sep  9 19:42:21.253: INFO: Pod "downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:42:21.256: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007 container client-container: 
STEP: delete the pod
Sep  9 19:42:21.295: INFO: Waiting for pod downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:42:21.305: INFO: Pod downwardapi-volume-9139f1c3-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:42:21.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wdkpm" for this suite.
Sep  9 19:42:27.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:42:27.397: INFO: namespace: e2e-tests-projected-wdkpm, resource: bindings, ignored listing per whitelist
Sep  9 19:42:27.422: INFO: namespace e2e-tests-projected-wdkpm deletion completed in 6.114610614s

• [SLOW TEST:10.342 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
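Because the container sets no memory limit, the resourceFieldRef falls back to the node's allocatable memory, which is what the downward API file reports. An illustrative pod using a projected downwardAPI source:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-default-limit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/memory_limit"]   # no resources.limits set on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
# With no limit set, the file reports the node's allocatable memory in bytes.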
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:42:27.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Sep  9 19:42:27.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:30.105: INFO: stderr: ""
Sep  9 19:42:30.105: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 19:42:30.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:30.228: INFO: stderr: ""
Sep  9 19:42:30.228: INFO: stdout: "update-demo-nautilus-4cfd8 update-demo-nautilus-wjwh9 "
Sep  9 19:42:30.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cfd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:30.337: INFO: stderr: ""
Sep  9 19:42:30.337: INFO: stdout: ""
Sep  9 19:42:30.337: INFO: update-demo-nautilus-4cfd8 is created but not running
Sep  9 19:42:35.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:35.431: INFO: stderr: ""
Sep  9 19:42:35.431: INFO: stdout: "update-demo-nautilus-4cfd8 update-demo-nautilus-wjwh9 "
Sep  9 19:42:35.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cfd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:35.548: INFO: stderr: ""
Sep  9 19:42:35.548: INFO: stdout: "true"
Sep  9 19:42:35.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4cfd8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:35.638: INFO: stderr: ""
Sep  9 19:42:35.638: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 19:42:35.638: INFO: validating pod update-demo-nautilus-4cfd8
Sep  9 19:42:35.642: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 19:42:35.642: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 19:42:35.642: INFO: update-demo-nautilus-4cfd8 is verified up and running
Sep  9 19:42:35.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjwh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:35.740: INFO: stderr: ""
Sep  9 19:42:35.740: INFO: stdout: "true"
Sep  9 19:42:35.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjwh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:35.841: INFO: stderr: ""
Sep  9 19:42:35.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 19:42:35.841: INFO: validating pod update-demo-nautilus-wjwh9
Sep  9 19:42:35.846: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 19:42:35.846: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 19:42:35.846: INFO: update-demo-nautilus-wjwh9 is verified up and running
STEP: scaling down the replication controller
Sep  9 19:42:35.848: INFO: scanned /root for discovery docs: 
Sep  9 19:42:35.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:36.983: INFO: stderr: ""
Sep  9 19:42:36.983: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 19:42:36.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:37.098: INFO: stderr: ""
Sep  9 19:42:37.098: INFO: stdout: "update-demo-nautilus-4cfd8 update-demo-nautilus-wjwh9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep  9 19:42:42.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:42.209: INFO: stderr: ""
Sep  9 19:42:42.209: INFO: stdout: "update-demo-nautilus-wjwh9 "
Sep  9 19:42:42.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjwh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:42.308: INFO: stderr: ""
Sep  9 19:42:42.308: INFO: stdout: "true"
Sep  9 19:42:42.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjwh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:42.405: INFO: stderr: ""
Sep  9 19:42:42.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 19:42:42.405: INFO: validating pod update-demo-nautilus-wjwh9
Sep  9 19:42:42.408: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 19:42:42.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 19:42:42.408: INFO: update-demo-nautilus-wjwh9 is verified up and running
STEP: scaling up the replication controller
Sep  9 19:42:42.410: INFO: scanned /root for discovery docs: 
Sep  9 19:42:42.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:43.559: INFO: stderr: ""
Sep  9 19:42:43.559: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep  9 19:42:43.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:43.655: INFO: stderr: ""
Sep  9 19:42:43.655: INFO: stdout: "update-demo-nautilus-bmnm2 update-demo-nautilus-wjwh9 "
Sep  9 19:42:43.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmnm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:43.748: INFO: stderr: ""
Sep  9 19:42:43.748: INFO: stdout: ""
Sep  9 19:42:43.749: INFO: update-demo-nautilus-bmnm2 is created but not running
Sep  9 19:42:48.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:48.855: INFO: stderr: ""
Sep  9 19:42:48.855: INFO: stdout: "update-demo-nautilus-bmnm2 update-demo-nautilus-wjwh9 "
Sep  9 19:42:48.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmnm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:48.956: INFO: stderr: ""
Sep  9 19:42:48.956: INFO: stdout: "true"
Sep  9 19:42:48.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmnm2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:49.046: INFO: stderr: ""
Sep  9 19:42:49.046: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 19:42:49.046: INFO: validating pod update-demo-nautilus-bmnm2
Sep  9 19:42:49.049: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 19:42:49.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 19:42:49.049: INFO: update-demo-nautilus-bmnm2 is verified up and running
Sep  9 19:42:49.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjwh9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:49.149: INFO: stderr: ""
Sep  9 19:42:49.149: INFO: stdout: "true"
Sep  9 19:42:49.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wjwh9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:49.241: INFO: stderr: ""
Sep  9 19:42:49.241: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep  9 19:42:49.241: INFO: validating pod update-demo-nautilus-wjwh9
Sep  9 19:42:49.245: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep  9 19:42:49.245: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep  9 19:42:49.245: INFO: update-demo-nautilus-wjwh9 is verified up and running
STEP: using delete to clean up resources
Sep  9 19:42:49.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:49.362: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep  9 19:42:49.363: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep  9 19:42:49.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-2x6r5'
Sep  9 19:42:49.569: INFO: stderr: "No resources found.\n"
Sep  9 19:42:49.569: INFO: stdout: ""
Sep  9 19:42:49.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-2x6r5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep  9 19:42:49.679: INFO: stderr: ""
Sep  9 19:42:49.679: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:42:49.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2x6r5" for this suite.
Sep  9 19:42:55.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:42:55.834: INFO: namespace: e2e-tests-kubectl-2x6r5, resource: bindings, ignored listing per whitelist
Sep  9 19:42:55.889: INFO: namespace e2e-tests-kubectl-2x6r5 deletion completed in 6.205998377s

• [SLOW TEST:28.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
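For reference, the scale exercise above can be replayed by hand with essentially the same kubectl invocations the harness drives. This is a sketch, not the framework's exact manifest: the namespace is a placeholder for whatever e2e-tests-kubectl-* namespace gets created, and the RC fields are reconstructed from the names, label selector, and image that appear in the log.

# recreate the update-demo-nautilus RC the test feeds over stdin
kubectl create -n <test-namespace> -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF
# the same scale-down / scale-up the test drives, waiting up to five minutes each time
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n <test-namespace>
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n <test-namespace>
# list the pods behind the name=update-demo label, as the poll loop above does
kubectl get pods -l name=update-demo -n <test-namespace> \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
# force-delete the controller once verification is done
kubectl delete rc update-demo-nautilus --grace-period=0 --force -n <test-namespace>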
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:42:55.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-9qdb
STEP: Creating a pod to test atomic-volume-subpath
Sep  9 19:42:56.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9qdb" in namespace "e2e-tests-subpath-4vfrx" to be "success or failure"
Sep  9 19:42:56.039: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.561396ms
Sep  9 19:42:58.043: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007728631s
Sep  9 19:43:00.046: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010905022s
Sep  9 19:43:02.051: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015166763s
Sep  9 19:43:04.055: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 8.019768187s
Sep  9 19:43:06.059: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 10.023970507s
Sep  9 19:43:08.064: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 12.028242534s
Sep  9 19:43:10.068: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 14.032801789s
Sep  9 19:43:12.073: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 16.037215409s
Sep  9 19:43:14.076: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 18.040878938s
Sep  9 19:43:16.080: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 20.044842466s
Sep  9 19:43:18.085: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 22.049170939s
Sep  9 19:43:20.089: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Running", Reason="", readiness=false. Elapsed: 24.053607862s
Sep  9 19:43:22.094: INFO: Pod "pod-subpath-test-secret-9qdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.058040204s
STEP: Saw pod success
Sep  9 19:43:22.094: INFO: Pod "pod-subpath-test-secret-9qdb" satisfied condition "success or failure"
Sep  9 19:43:22.097: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-9qdb container test-container-subpath-secret-9qdb: 
STEP: delete the pod
Sep  9 19:43:22.132: INFO: Waiting for pod pod-subpath-test-secret-9qdb to disappear
Sep  9 19:43:22.190: INFO: Pod pod-subpath-test-secret-9qdb no longer exists
STEP: Deleting pod pod-subpath-test-secret-9qdb
Sep  9 19:43:22.190: INFO: Deleting pod "pod-subpath-test-secret-9qdb" in namespace "e2e-tests-subpath-4vfrx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:43:22.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4vfrx" for this suite.
Sep  9 19:43:28.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:43:28.286: INFO: namespace: e2e-tests-subpath-4vfrx, resource: bindings, ignored listing per whitelist
Sep  9 19:43:28.293: INFO: namespace e2e-tests-subpath-4vfrx deletion completed in 6.097761595s

• [SLOW TEST:32.404 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
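The pod-subpath-test-secret pod above mounts a secret-backed volume through a volumeMount subPath and exits once the projected file checks out. A minimal hand-written sketch of that shape follows; the secret name, key, paths, and the busybox image are illustrative stand-ins, not the exact objects the framework generates.

kubectl create -n <test-namespace> -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: my-secret                 # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
  containers:
  - name: test-container-subpath-secret
    image: busybox                # stand-in for the e2e test image
    command: ["cat", "/test-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /test-volume/data-1
      subPath: data-1             # mount a single key of the secret volume
EOF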
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:43:28.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Sep  9 19:43:28.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-downward-api-pvkpx" to be "success or failure"
Sep  9 19:43:28.441: INFO: Pod "downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.061301ms
Sep  9 19:43:30.444: INFO: Pod "downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036921096s
Sep  9 19:43:32.448: INFO: Pod "downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041147328s
STEP: Saw pod success
Sep  9 19:43:32.448: INFO: Pod "downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:43:32.451: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007 container client-container: 
STEP: delete the pod
Sep  9 19:43:32.527: INFO: Waiting for pod downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:43:32.543: INFO: Pod downwardapi-volume-bbac9afa-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:43:32.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pvkpx" for this suite.
Sep  9 19:43:38.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:43:38.591: INFO: namespace: e2e-tests-downward-api-pvkpx, resource: bindings, ignored listing per whitelist
Sep  9 19:43:38.636: INFO: namespace e2e-tests-downward-api-pvkpx deletion completed in 6.088655436s

• [SLOW TEST:10.342 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
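The downwardapi-volume pod above verifies that a per-item mode is honoured on the projected file. A rough equivalent is sketched below; the pod name, mount path, 0400 mode, and busybox command are illustrative (the real test image checks the file permissions itself).

kubectl create -n <test-namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # stand-in for the e2e test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                      # per-item file mode under test
EOF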
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:43:38.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep  9 19:43:38.715: INFO: Waiting up to 5m0s for pod "pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-emptydir-9whbc" to be "success or failure"
Sep  9 19:43:38.739: INFO: Pod "pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.969288ms
Sep  9 19:43:40.750: INFO: Pod "pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034709963s
Sep  9 19:43:42.754: INFO: Pod "pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038571172s
STEP: Saw pod success
Sep  9 19:43:42.754: INFO: Pod "pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:43:42.757: INFO: Trying to get logs from node hunter-worker2 pod pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007 container test-container: 
STEP: delete the pod
Sep  9 19:43:42.774: INFO: Waiting for pod pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:43:42.779: INFO: Pod pod-c1d3bb6d-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:43:42.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9whbc" for this suite.
Sep  9 19:43:48.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:43:48.889: INFO: namespace: e2e-tests-emptydir-9whbc, resource: bindings, ignored listing per whitelist
Sep  9 19:43:48.902: INFO: namespace e2e-tests-emptydir-9whbc deletion completed in 6.100599465s

• [SLOW TEST:10.265 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
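The (root,0644,tmpfs) case above amounts to a memory-backed emptyDir volume plus a container, running as root, that writes a file with 0644 permissions and reads it back. A stand-alone sketch under those assumptions, with busybox in place of the e2e mounttest image and illustrative names throughout:

kubectl create -n <test-namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox                  # stand-in for the e2e mounttest image
    command:
    - sh
    - -c
    - echo mount-tester > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file && cat /test-volume/file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
EOF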
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Sep  9 19:43:48.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c7fc7c9a-f2d4-11ea-88c2-0242ac110007
STEP: Creating a pod to test consume secrets
Sep  9 19:43:49.139: INFO: Waiting up to 5m0s for pod "pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007" in namespace "e2e-tests-secrets-t4cfb" to be "success or failure"
Sep  9 19:43:49.160: INFO: Pod "pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.841275ms
Sep  9 19:43:51.165: INFO: Pod "pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026120598s
Sep  9 19:43:53.168: INFO: Pod "pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029874831s
STEP: Saw pod success
Sep  9 19:43:53.169: INFO: Pod "pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007" satisfied condition "success or failure"
Sep  9 19:43:53.172: INFO: Trying to get logs from node hunter-worker pod pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Sep  9 19:43:53.205: INFO: Waiting for pod pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007 to disappear
Sep  9 19:43:53.214: INFO: Pod pod-secrets-c80734f5-f2d4-11ea-88c2-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Sep  9 19:43:53.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-t4cfb" for this suite.
Sep  9 19:43:59.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:43:59.281: INFO: namespace: e2e-tests-secrets-t4cfb, resource: bindings, ignored listing per whitelist
Sep  9 19:43:59.312: INFO: namespace e2e-tests-secrets-t4cfb deletion completed in 6.094306494s
STEP: Destroying namespace "e2e-tests-secret-namespace-6d46c" for this suite.
Sep  9 19:44:05.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  9 19:44:05.382: INFO: namespace: e2e-tests-secret-namespace-6d46c, resource: bindings, ignored listing per whitelist
Sep  9 19:44:05.427: INFO: namespace e2e-tests-secret-namespace-6d46c deletion completed in 6.115333991s

• [SLOW TEST:16.525 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
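The last spec creates a secret with the same name in a second namespace and confirms the pod still mounts the copy from its own namespace, which is why two namespaces are torn down above. The idea can be reproduced along these lines; every name below is illustrative and <pod-namespace> is a placeholder.

# a secret named "test-secret" in the pod's namespace...
kubectl create secret generic test-secret --from-literal=data-1=value-1 -n <pod-namespace>
# ...and an unrelated secret with the same name in another namespace
kubectl create namespace other
kubectl create secret generic test-secret --from-literal=data-1=other-value -n other
# a pod in <pod-namespace> that mounts "test-secret" should only ever see value-1
kubectl create -n <pod-namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
EOF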
SSSSSSS
Sep  9 19:44:05.427: INFO: Running AfterSuite actions on all nodes
Sep  9 19:44:05.427: INFO: Running AfterSuite actions on node 1
Sep  9 19:44:05.427: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6463.760 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS