I1216 10:47:16.323541 8 e2e.go:224] Starting e2e run "6c4be561-1ff1-11ea-9388-0242ac110004" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576493235 - Will randomize all specs
Will run 201 of 2164 specs

Dec 16 10:47:16.802: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 10:47:16.814: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 16 10:47:16.882: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 16 10:47:17.136: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 16 10:47:17.136: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 16 10:47:17.136: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 16 10:47:17.176: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 16 10:47:17.176: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 16 10:47:17.176: INFO: e2e test version: v1.13.12
Dec 16 10:47:17.196: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:47:17.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Dec 16 10:47:17.618: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6db48dda-1ff1-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 10:47:17.654: INFO: Waiting up to 5m0s for pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-jlspb" to be "success or failure"
Dec 16 10:47:17.688: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 32.992553ms
Dec 16 10:47:19.990: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335144813s
Dec 16 10:47:22.013: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358494919s
Dec 16 10:47:24.025: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370069208s
Dec 16 10:47:26.199: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544293977s
Dec 16 10:47:28.212: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.55723496s
Dec 16 10:47:30.228: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.573309927s
STEP: Saw pod success
Dec 16 10:47:30.228: INFO: Pod "pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:47:30.233: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 16 10:47:30.502: INFO: Waiting for pod pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004 to disappear
Dec 16 10:47:31.442: INFO: Pod pod-secrets-6db5b841-1ff1-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:47:31.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jlspb" for this suite.
Dec 16 10:47:37.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:47:37.941: INFO: namespace: e2e-tests-secrets-jlspb, resource: bindings, ignored listing per whitelist
Dec 16 10:47:38.027: INFO: namespace e2e-tests-secrets-jlspb deletion completed in 6.550430956s

• [SLOW TEST:20.831 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:47:38.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 16 10:47:38.262: INFO: Waiting up to 5m0s for pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-sdt9d" to be "success or failure"
Dec 16 10:47:38.284: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.169502ms
Dec 16 10:47:40.298: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036250397s
Dec 16 10:47:42.334: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072101118s
Dec 16 10:47:44.419: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156741276s
Dec 16 10:47:46.762: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500219526s
Dec 16 10:47:49.027: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.76468175s
STEP: Saw pod success
Dec 16 10:47:49.027: INFO: Pod "pod-79fd077b-1ff1-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:47:49.042: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-79fd077b-1ff1-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 10:47:49.624: INFO: Waiting for pod pod-79fd077b-1ff1-11ea-9388-0242ac110004 to disappear
Dec 16 10:47:49.638: INFO: Pod pod-79fd077b-1ff1-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:47:49.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sdt9d" for this suite.
Dec 16 10:47:55.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:47:55.792: INFO: namespace: e2e-tests-emptydir-sdt9d, resource: bindings, ignored listing per whitelist
Dec 16 10:47:55.912: INFO: namespace e2e-tests-emptydir-sdt9d deletion completed in 6.255691805s

• [SLOW TEST:17.884 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:47:55.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-mzzx
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 10:47:56.177: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mzzx" in namespace "e2e-tests-subpath-jg4dn" to be "success or failure"
Dec 16 10:47:56.212: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 35.340456ms
Dec 16 10:47:58.245: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067969586s
Dec 16 10:48:00.280: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103458716s
Dec 16 10:48:02.703: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526252848s
Dec 16 10:48:04.716: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539594108s
Dec 16 10:48:06.734: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.557421213s
Dec 16 10:48:09.315: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.138334756s
Dec 16 10:48:11.332: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Pending", Reason="", readiness=false. Elapsed: 15.155298423s
Dec 16 10:48:13.357: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 17.180416014s
Dec 16 10:48:15.379: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 19.202317306s
Dec 16 10:48:17.390: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 21.213209162s
Dec 16 10:48:19.427: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 23.250580546s
Dec 16 10:48:21.445: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 25.268614623s
Dec 16 10:48:23.470: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 27.293425452s
Dec 16 10:48:25.500: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 29.323284707s
Dec 16 10:48:27.533: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 31.355759584s
Dec 16 10:48:29.766: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Running", Reason="", readiness=false. Elapsed: 33.589619195s
Dec 16 10:48:31.786: INFO: Pod "pod-subpath-test-configmap-mzzx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.608780893s
STEP: Saw pod success
Dec 16 10:48:31.786: INFO: Pod "pod-subpath-test-configmap-mzzx" satisfied condition "success or failure"
Dec 16 10:48:31.800: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-mzzx container test-container-subpath-configmap-mzzx:
STEP: delete the pod
Dec 16 10:48:32.008: INFO: Waiting for pod pod-subpath-test-configmap-mzzx to disappear
Dec 16 10:48:32.111: INFO: Pod pod-subpath-test-configmap-mzzx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mzzx
Dec 16 10:48:32.111: INFO: Deleting pod "pod-subpath-test-configmap-mzzx" in namespace "e2e-tests-subpath-jg4dn"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:48:32.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jg4dn" for this suite.
Dec 16 10:48:38.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:48:38.318: INFO: namespace: e2e-tests-subpath-jg4dn, resource: bindings, ignored listing per whitelist
Dec 16 10:48:38.488: INFO: namespace e2e-tests-subpath-jg4dn deletion completed in 6.331307277s

• [SLOW TEST:42.576 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:48:38.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-k8spt
I1216 10:48:38.881988 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-k8spt, replica count: 1
I1216 10:48:39.933343 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:40.934085 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:41.934658 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:42.935202 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:43.936014 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:44.936853 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:45.937432 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:46.938488 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:47.939590 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:48.940546 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1216 10:48:49.941434 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 16 10:48:50.145: INFO: Created: latency-svc-cz8nb
Dec 16 10:48:50.246: INFO: Got endpoints: latency-svc-cz8nb [204.299858ms]
Dec 16 10:48:50.320: INFO: Created: latency-svc-79bnf
Dec 16 10:48:50.433: INFO: Got endpoints: latency-svc-79bnf [186.774955ms]
Dec 16 10:48:50.463: INFO: Created: latency-svc-r9pz4
Dec 16 10:48:50.503: INFO: Got endpoints: latency-svc-r9pz4 [255.096475ms]
Dec 16 10:48:50.746: INFO: Created: latency-svc-bw92d
Dec 16 10:48:50.792: INFO: Got endpoints: latency-svc-bw92d [544.678213ms]
Dec 16 10:48:51.058: INFO: Created: latency-svc-pglrq
Dec 16 10:48:51.069: INFO: Got endpoints: latency-svc-pglrq [821.826051ms]
Dec 16 10:48:51.259: INFO: Created: latency-svc-5xz2m
Dec 16 10:48:51.277: INFO: Got endpoints: latency-svc-5xz2m [1.02855034s]
Dec 16 10:48:51.338: INFO: Created: latency-svc-8nkd5
Dec 16 10:48:51.494: INFO: Got endpoints: latency-svc-8nkd5 [1.244920192s]
Dec 16 10:48:51.520: INFO: Created: latency-svc-bfxmg
Dec 16 10:48:51.544: INFO: Got endpoints: latency-svc-bfxmg [1.295494552s]
Dec 16 10:48:51.723: INFO: Created: latency-svc-zkxlq
Dec 16 10:48:51.734: INFO: Got endpoints: latency-svc-zkxlq [1.485745518s]
Dec 16 10:48:51.966: INFO: Created: latency-svc-xsc47
Dec 16 10:48:51.993: INFO: Got endpoints: latency-svc-xsc47 [1.744512743s]
Dec 16 10:48:52.157: INFO: Created: latency-svc-grhg2
Dec 16 10:48:52.347: INFO: Created: latency-svc-5hblb
Dec 16 10:48:52.378: INFO: Got endpoints: latency-svc-grhg2 [2.131462919s]
Dec 16 10:48:52.397: INFO: Got endpoints: latency-svc-5hblb [2.14786387s]
Dec 16 10:48:52.423: INFO: Created: latency-svc-gd6wj
Dec 16 10:48:52.547: INFO: Got endpoints: latency-svc-gd6wj [2.299087092s]
Dec 16 10:48:52.768: INFO: Created: latency-svc-r42kj
Dec 16 10:48:52.800: INFO: Got endpoints: latency-svc-r42kj [2.551274483s]
Dec 16 10:48:52.865: INFO: Created: latency-svc-7mtlg
Dec 16 10:48:52.960: INFO: Got endpoints: latency-svc-7mtlg [2.711572688s]
Dec 16 10:48:52.985: INFO: Created: latency-svc-q6pmf
Dec 16 10:48:52.999: INFO: Got endpoints: latency-svc-q6pmf [2.751686116s]
Dec 16 10:48:53.053: INFO: Created: latency-svc-kjm8l
Dec 16 10:48:53.170: INFO: Got endpoints: latency-svc-kjm8l [2.736525976s]
Dec 16 10:48:53.290: INFO: Created: latency-svc-qclt9
Dec 16 10:48:53.392: INFO: Got endpoints: latency-svc-qclt9 [2.888906749s]
Dec 16 10:48:53.431: INFO: Created: latency-svc-d4ml2
Dec 16 10:48:53.479: INFO: Got endpoints: latency-svc-d4ml2 [2.686843753s]
Dec 16 10:48:53.633: INFO: Created: latency-svc-p9ch6
Dec 16 10:48:53.671: INFO: Got endpoints: latency-svc-p9ch6 [2.60165349s]
Dec 16 10:48:54.028: INFO: Created: latency-svc-pdgn8
Dec 16 10:48:54.047: INFO: Got endpoints: latency-svc-pdgn8 [2.769407291s]
Dec 16 10:48:54.333: INFO: Created: latency-svc-lqlf2
Dec 16 10:48:54.362: INFO: Got endpoints: latency-svc-lqlf2 [2.868056472s]
Dec 16 10:48:54.588: INFO: Created: latency-svc-t7z9f
Dec 16 10:48:54.623: INFO: Got endpoints: latency-svc-t7z9f [3.078373011s]
Dec 16 10:48:54.790: INFO: Created: latency-svc-ljw5j
Dec 16 10:48:54.852: INFO: Created: latency-svc-zcfcf
Dec 16 10:48:54.856: INFO: Got endpoints: latency-svc-ljw5j [3.121792048s]
Dec 16 10:48:55.022: INFO: Got endpoints: latency-svc-zcfcf [3.028792254s]
Dec 16 10:48:55.063: INFO: Created: latency-svc-sxc6g
Dec 16 10:48:55.076: INFO: Got endpoints: latency-svc-sxc6g [2.697268383s]
Dec 16 10:48:55.248: INFO: Created: latency-svc-pnx76
Dec 16 10:48:55.315: INFO: Created: latency-svc-dqd8c
Dec 16 10:48:55.315: INFO: Got endpoints: latency-svc-pnx76 [2.917794025s]
Dec 16 10:48:55.331: INFO: Got endpoints: latency-svc-dqd8c [2.783083841s]
Dec 16 10:48:55.488: INFO: Created: latency-svc-j9dkm
Dec 16 10:48:55.733: INFO: Got endpoints: latency-svc-j9dkm [2.932403264s]
Dec 16 10:48:55.767: INFO: Created: latency-svc-xhm4h
Dec 16 10:48:55.777: INFO: Got endpoints: latency-svc-xhm4h [2.816252408s]
Dec 16 10:48:55.995: INFO: Created: latency-svc-d8t82
Dec 16 10:48:56.009: INFO: Got endpoints: latency-svc-d8t82 [3.00985461s]
Dec 16 10:48:56.162: INFO: Created: latency-svc-wb5st
Dec 16 10:48:56.198: INFO: Got endpoints: latency-svc-wb5st [3.026603225s]
Dec 16 10:48:56.278: INFO: Created: latency-svc-llx66
Dec 16 10:48:56.405: INFO: Got endpoints: latency-svc-llx66 [3.012557783s]
Dec 16 10:48:56.456: INFO: Created: latency-svc-t2vgq
Dec 16 10:48:56.479: INFO: Got endpoints: latency-svc-t2vgq [2.998663247s]
Dec 16 10:48:56.735: INFO: Created: latency-svc-hdcj8
Dec 16 10:48:56.777: INFO: Got endpoints: latency-svc-hdcj8 [3.105649189s]
Dec 16 10:48:56.962: INFO: Created: latency-svc-665bx
Dec 16 10:48:56.983: INFO: Created: latency-svc-mqrfw
Dec 16 10:48:56.990: INFO: Got endpoints: latency-svc-665bx [2.942461472s]
Dec 16 10:48:57.016: INFO: Got endpoints: latency-svc-mqrfw [2.653184328s]
Dec 16 10:48:57.176: INFO: Created: latency-svc-zjjnc
Dec 16 10:48:57.205: INFO: Got endpoints: latency-svc-zjjnc [2.581889762s]
Dec 16 10:48:57.336: INFO: Created: latency-svc-gr64v
Dec 16 10:48:57.362: INFO: Got endpoints: latency-svc-gr64v [2.505442s]
Dec 16 10:48:57.584: INFO: Created: latency-svc-24zv9
Dec 16 10:48:57.599: INFO: Got endpoints: latency-svc-24zv9 [2.575944327s]
Dec 16 10:48:57.833: INFO: Created: latency-svc-shhks
Dec 16 10:48:57.875: INFO: Got endpoints: latency-svc-shhks [2.798817126s]
Dec 16 10:48:58.045: INFO: Created: latency-svc-xrwpr
Dec 16 10:48:58.065: INFO: Got endpoints: latency-svc-xrwpr [2.750312252s]
Dec 16 10:48:58.120: INFO: Created: latency-svc-6l8s9
Dec 16 10:48:58.227: INFO: Got endpoints: latency-svc-6l8s9 [2.895716709s]
Dec 16 10:48:58.259: INFO: Created: latency-svc-6lfk8
Dec 16 10:48:58.285: INFO: Got endpoints: latency-svc-6lfk8 [2.552062436s]
Dec 16 10:48:58.467: INFO: Created: latency-svc-24jgj
Dec 16 10:48:58.495: INFO: Got endpoints: latency-svc-24jgj [2.717485139s]
Dec 16 10:48:58.731: INFO: Created: latency-svc-bj2jh
Dec 16 10:48:58.933: INFO: Got endpoints: latency-svc-bj2jh [2.924449107s]
Dec 16 10:48:58.967: INFO: Created: latency-svc-kwkbp
Dec 16 10:48:59.001: INFO: Got endpoints: latency-svc-kwkbp [2.802373067s]
Dec 16 10:48:59.220: INFO: Created: latency-svc-hxwnk
Dec 16 10:48:59.220: INFO: Got endpoints: latency-svc-hxwnk [2.814626225s]
Dec 16 10:48:59.367: INFO: Created: latency-svc-bqqs6
Dec 16 10:48:59.380: INFO: Got endpoints: latency-svc-bqqs6 [2.901110347s]
Dec 16 10:48:59.446: INFO: Created: latency-svc-7t7rd
Dec 16 10:48:59.539: INFO: Got endpoints: latency-svc-7t7rd [2.761669849s]
Dec 16 10:48:59.564: INFO: Created: latency-svc-9c4bs
Dec 16 10:48:59.583: INFO: Got endpoints: latency-svc-9c4bs [2.592826833s]
Dec 16 10:48:59.779: INFO: Created: latency-svc-scp9n
Dec 16 10:48:59.818: INFO: Got endpoints: latency-svc-scp9n [2.801312213s]
Dec 16 10:49:00.010: INFO: Created: latency-svc-wks72
Dec 16 10:49:00.034: INFO: Got endpoints: latency-svc-wks72 [2.829188564s]
Dec 16 10:49:00.185: INFO: Created: latency-svc-wkm4z
Dec 16 10:49:00.209: INFO: Got endpoints: latency-svc-wkm4z [2.846398502s]
Dec 16 10:49:00.273: INFO: Created: latency-svc-vllcw
Dec 16 10:49:00.416: INFO: Created: latency-svc-rj9hv
Dec 16 10:49:00.659: INFO: Got endpoints: latency-svc-vllcw [3.059991043s]
Dec 16 10:49:00.669: INFO: Created: latency-svc-6tkq9
Dec 16 10:49:00.695: INFO: Got endpoints: latency-svc-6tkq9 [2.628776611s]
Dec 16 10:49:00.711: INFO: Got endpoints: latency-svc-rj9hv [2.835138261s]
Dec 16 10:49:00.840: INFO: Created: latency-svc-gvwbd
Dec 16 10:49:00.893: INFO: Got endpoints: latency-svc-gvwbd [2.665070985s]
Dec 16 10:49:00.985: INFO: Created: latency-svc-8k67c
Dec 16 10:49:01.019: INFO: Got endpoints: latency-svc-8k67c [2.733401648s]
Dec 16 10:49:01.210: INFO: Created: latency-svc-rgktf
Dec 16 10:49:01.223: INFO: Got endpoints: latency-svc-rgktf [2.728003227s]
Dec 16 10:49:01.291: INFO: Created: latency-svc-fn588
Dec 16 10:49:01.430: INFO: Got endpoints: latency-svc-fn588 [2.496156749s]
Dec 16 10:49:01.444: INFO: Created: latency-svc-f4xxx
Dec 16 10:49:01.479: INFO: Got endpoints: latency-svc-f4xxx [2.477542081s]
Dec 16 10:49:01.698: INFO: Created: latency-svc-bhk8n
Dec 16 10:49:01.726: INFO: Got endpoints: latency-svc-bhk8n [2.506067347s]
Dec 16 10:49:01.906: INFO: Created: latency-svc-7vqcx
Dec 16 10:49:01.934: INFO: Got endpoints: latency-svc-7vqcx [2.554033005s]
Dec 16 10:49:02.125: INFO: Created: latency-svc-kjrrf
Dec 16 10:49:02.141: INFO: Got endpoints: latency-svc-kjrrf [2.60198238s]
Dec 16 10:49:02.910: INFO: Created: latency-svc-4t448
Dec 16 10:49:02.998: INFO: Got endpoints: latency-svc-4t448 [3.415284388s]
Dec 16 10:49:03.071: INFO: Created: latency-svc-qfhmk
Dec 16 10:49:03.236: INFO: Got endpoints: latency-svc-qfhmk [3.417463354s]
Dec 16 10:49:03.461: INFO: Created: latency-svc-gwv4r
Dec 16 10:49:03.469: INFO: Got endpoints: latency-svc-gwv4r [3.434395101s]
Dec 16 10:49:03.681: INFO: Created: latency-svc-5wnbl
Dec 16 10:49:03.702: INFO: Got endpoints: latency-svc-5wnbl [3.493014509s]
Dec 16 10:49:03.775: INFO: Created: latency-svc-x7khk
Dec 16 10:49:03.887: INFO: Got endpoints: latency-svc-x7khk [3.227738181s]
Dec 16 10:49:04.126: INFO: Created: latency-svc-4cj4p
Dec 16 10:49:04.176: INFO: Got endpoints: latency-svc-4cj4p [3.480498193s]
Dec 16 10:49:04.185: INFO: Created: latency-svc-46p97
Dec 16 10:49:04.197: INFO: Got endpoints: latency-svc-46p97 [3.486636915s]
Dec 16 10:49:04.300: INFO: Created: latency-svc-zbv4f
Dec 16 10:49:04.309: INFO: Got endpoints: latency-svc-zbv4f [3.416158471s]
Dec 16 10:49:04.342: INFO: Created: latency-svc-sf86p
Dec 16 10:49:04.355: INFO: Got endpoints: latency-svc-sf86p [3.335317867s]
Dec 16 10:49:04.536: INFO: Created: latency-svc-s7nzt
Dec 16 10:49:04.568: INFO: Got endpoints: latency-svc-s7nzt [3.344937938s]
Dec 16 10:49:04.765: INFO: Created: latency-svc-jpnpf
Dec 16 10:49:04.872: INFO: Got endpoints: latency-svc-jpnpf [3.441487441s]
Dec 16 10:49:04.902: INFO: Created: latency-svc-4z56s
Dec 16 10:49:04.913: INFO: Got endpoints: latency-svc-4z56s [3.433416121s]
Dec 16 10:49:04.999: INFO: Created: latency-svc-6c2qw
Dec 16 10:49:05.045: INFO: Got endpoints: latency-svc-6c2qw [3.318792958s]
Dec 16 10:49:05.080: INFO: Created: latency-svc-zdh7v
Dec 16 10:49:05.084: INFO: Got endpoints: latency-svc-zdh7v [3.149253888s]
Dec 16 10:49:05.134: INFO: Created: latency-svc-v7snn
Dec 16 10:49:05.251: INFO: Got endpoints: latency-svc-v7snn [3.109590301s]
Dec 16 10:49:05.280: INFO: Created: latency-svc-qct7k
Dec 16 10:49:05.305: INFO: Got endpoints: latency-svc-qct7k [2.306468075s]
Dec 16 10:49:05.458: INFO: Created: latency-svc-qj4t4
Dec 16 10:49:05.481: INFO: Got endpoints: latency-svc-qj4t4 [2.244612049s]
Dec 16 10:49:05.773: INFO: Created: latency-svc-7rw4g
Dec 16 10:49:05.808: INFO: Got endpoints: latency-svc-7rw4g [2.338491361s]
Dec 16 10:49:05.880: INFO: Created: latency-svc-c8p9v
Dec 16 10:49:05.961: INFO: Got endpoints: latency-svc-c8p9v [2.259102548s]
Dec 16 10:49:05.978: INFO: Created: latency-svc-6cfzs
Dec 16 10:49:05.993: INFO: Got endpoints: latency-svc-6cfzs [2.105148859s]
Dec 16 10:49:06.159: INFO: Created: latency-svc-b5mr9
Dec 16 10:49:06.191: INFO: Got endpoints: latency-svc-b5mr9 [2.015279038s]
Dec 16 10:49:06.219: INFO: Created: latency-svc-2nggw
Dec 16 10:49:06.293: INFO: Got endpoints: latency-svc-2nggw [2.095004777s]
Dec 16 10:49:06.332: INFO: Created: latency-svc-krlmf
Dec 16 10:49:06.343: INFO: Got endpoints: latency-svc-krlmf [151.795263ms]
Dec 16 10:49:06.469: INFO: Created: latency-svc-qtb94
Dec 16 10:49:06.497: INFO: Got endpoints: latency-svc-qtb94 [2.187305189s]
Dec 16 10:49:06.730: INFO: Created: latency-svc-x2bwg
Dec 16 10:49:06.998: INFO: Got endpoints: latency-svc-x2bwg [2.643779966s]
Dec 16 10:49:07.014: INFO: Created: latency-svc-wwrwm
Dec 16 10:49:07.029: INFO: Got endpoints: latency-svc-wwrwm [2.461113359s]
Dec 16 10:49:07.226: INFO: Created: latency-svc-nd22b
Dec 16 10:49:07.356: INFO: Got endpoints: latency-svc-nd22b [2.483934114s]
Dec 16 10:49:07.426: INFO: Created: latency-svc-48j7d
Dec 16 10:49:07.451: INFO: Got endpoints: latency-svc-48j7d [2.53809127s]
Dec 16 10:49:07.785: INFO: Created: latency-svc-8kbjf
Dec 16 10:49:07.940: INFO: Got endpoints: latency-svc-8kbjf [2.894466328s]
Dec 16 10:49:08.006: INFO: Created: latency-svc-cjwg7
Dec 16 10:49:08.184: INFO: Got endpoints: latency-svc-cjwg7 [3.099882261s]
Dec 16 10:49:08.222: INFO: Created: latency-svc-h5bw7
Dec 16 10:49:08.247: INFO: Got endpoints: latency-svc-h5bw7 [2.994886336s]
Dec 16 10:49:08.416: INFO: Created: latency-svc-gc69z
Dec 16 10:49:08.447: INFO: Got endpoints: latency-svc-gc69z [3.141951706s]
Dec 16 10:49:08.627: INFO: Created: latency-svc-qh7vl
Dec 16 10:49:08.640: INFO: Got endpoints: latency-svc-qh7vl [3.158233343s]
Dec 16 10:49:08.726: INFO: Created: latency-svc-snpc5
Dec 16 10:49:08.843: INFO: Got endpoints: latency-svc-snpc5 [3.034199014s]
Dec 16 10:49:08.878: INFO: Created: latency-svc-v9rns
Dec 16 10:49:08.905: INFO: Got endpoints: latency-svc-v9rns [2.943600363s]
Dec 16 10:49:09.100: INFO: Created: latency-svc-5mm9f
Dec 16 10:49:09.100: INFO: Got endpoints: latency-svc-5mm9f [3.107014891s]
Dec 16 10:49:09.137: INFO: Created: latency-svc-gj5x5
Dec 16 10:49:09.267: INFO: Got endpoints: latency-svc-gj5x5 [2.974535628s]
Dec 16 10:49:09.343: INFO: Created: latency-svc-pmxwm
Dec 16 10:49:09.442: INFO: Got endpoints: latency-svc-pmxwm [3.098891186s]
Dec 16 10:49:09.463: INFO: Created: latency-svc-kjjt9
Dec 16 10:49:09.502: INFO: Got endpoints: latency-svc-kjjt9 [3.005390425s]
Dec 16 10:49:09.685: INFO: Created: latency-svc-622ns
Dec 16 10:49:09.694: INFO: Got endpoints: latency-svc-622ns [2.69489409s]
Dec 16 10:49:09.871: INFO: Created: latency-svc-w29zf
Dec 16 10:49:09.922: INFO: Got endpoints: latency-svc-w29zf [2.892426958s]
Dec 16 10:49:10.134: INFO: Created: latency-svc-7bk7c
Dec 16 10:49:10.161: INFO: Got endpoints: latency-svc-7bk7c [2.804032835s]
Dec 16 10:49:10.272: INFO: Created: latency-svc-jnx6l
Dec 16 10:49:10.301: INFO: Got endpoints: latency-svc-jnx6l [2.849817036s]
Dec 16 10:49:10.357: INFO: Created: latency-svc-6pxkj
Dec 16 10:49:10.543: INFO: Got endpoints: latency-svc-6pxkj [2.602675918s]
Dec 16 10:49:10.575: INFO: Created: latency-svc-2lr4h
Dec 16 10:49:10.631: INFO: Got endpoints: latency-svc-2lr4h [2.447316232s]
Dec 16 10:49:10.955: INFO: Created: latency-svc-5rp7p
Dec 16 10:49:11.170: INFO: Created: latency-svc-mkjx4
Dec 16 10:49:11.182: INFO: Got endpoints: latency-svc-5rp7p [2.935174878s]
Dec 16 10:49:11.198: INFO: Got endpoints: latency-svc-mkjx4 [2.750182259s]
Dec 16 10:49:11.414: INFO: Created: latency-svc-q22zz
Dec 16 10:49:11.449: INFO: Got endpoints: latency-svc-q22zz [2.809375851s]
Dec 16 10:49:11.677: INFO: Created: latency-svc-nzq2d
Dec 16 10:49:11.725: INFO: Got endpoints: latency-svc-nzq2d [2.881787254s]
Dec 16 10:49:11.913: INFO: Created: latency-svc-vzbhd
Dec 16 10:49:11.934: INFO: Got endpoints: latency-svc-vzbhd [3.028940236s]
Dec 16 10:49:12.098: INFO: Created: latency-svc-pm7lx
Dec 16 10:49:12.108: INFO: Got endpoints: latency-svc-pm7lx [3.008275653s]
Dec 16 10:49:12.178: INFO: Created: latency-svc-4vh2v
Dec 16 10:49:12.253: INFO: Got endpoints: latency-svc-4vh2v [2.985276952s]
Dec 16 10:49:12.299: INFO: Created: latency-svc-9dnlz
Dec 16 10:49:12.494: INFO: Got endpoints: latency-svc-9dnlz [3.051491202s]
Dec 16 10:49:12.512: INFO: Created: latency-svc-q2cbb
Dec 16 10:49:12.513: INFO: Got endpoints: latency-svc-q2cbb [3.010270479s]
Dec 16 10:49:12.726: INFO: Created: latency-svc-dj4r7
Dec 16 10:49:12.747: INFO: Got endpoints: latency-svc-dj4r7 [3.052931178s]
Dec 16 10:49:12.819: INFO: Created: latency-svc-ndtfn
Dec 16 10:49:12.983: INFO: Got endpoints: latency-svc-ndtfn [3.060510338s]
Dec 16 10:49:13.048: INFO: Created: latency-svc-rv9nd
Dec 16 10:49:13.052: INFO: Got endpoints: latency-svc-rv9nd [2.891374941s]
Dec 16 10:49:13.321: INFO: Created: latency-svc-g9wqq
Dec 16 10:49:13.323: INFO: Got endpoints: latency-svc-g9wqq [3.022025594s]
Dec 16 10:49:13.636: INFO: Created: latency-svc-4d8zj
Dec 16 10:49:13.639: INFO: Got endpoints: latency-svc-4d8zj [3.096079075s]
Dec 16 10:49:13.906: INFO: Created: latency-svc-tkxzr
Dec 16 10:49:14.115: INFO: Got endpoints: latency-svc-tkxzr [3.48244487s]
Dec 16 10:49:14.127: INFO: Created: latency-svc-hmh2k
Dec 16 10:49:14.158: INFO: Got endpoints: latency-svc-hmh2k [2.975905523s]
Dec 16 10:49:14.534: INFO: Created: latency-svc-9pmg6
Dec 16 10:49:14.604: INFO: Got endpoints: latency-svc-9pmg6 [3.405726918s]
Dec 16 10:49:14.840: INFO: Created: latency-svc-hxwb9
Dec 16 10:49:14.875: INFO: Got endpoints: latency-svc-hxwb9 [3.425799624s]
Dec 16 10:49:14.926: INFO: Created: latency-svc-hzwv2
Dec 16 10:49:15.008: INFO: Got endpoints: latency-svc-hzwv2 [3.2820956s]
Dec 16 10:49:15.034: INFO: Created: latency-svc-k74vg
Dec 16 10:49:15.037: INFO: Got endpoints: latency-svc-k74vg [3.102452093s]
Dec 16 10:49:15.087: INFO: Created: latency-svc-79gzq
Dec 16 10:49:15.096: INFO: Got endpoints: latency-svc-79gzq [2.987015698s]
Dec 16 10:49:15.261: INFO: Created: latency-svc-dkzqr
Dec 16 10:49:15.270: INFO: Got endpoints: latency-svc-dkzqr [3.016345001s]
Dec 16 10:49:15.305: INFO: Created: latency-svc-dz9xp
Dec 16 10:49:15.329: INFO: Got endpoints: latency-svc-dz9xp [2.833879799s]
Dec 16 10:49:15.507: INFO: Created: latency-svc-ldnmb
Dec 16 10:49:15.520: INFO: Got endpoints: latency-svc-ldnmb [3.007240227s]
Dec 16 10:49:15.756: INFO: Created: latency-svc-94gwg
Dec 16 10:49:15.771: INFO: Got endpoints: latency-svc-94gwg [3.023723076s]
Dec 16 10:49:15.927: INFO: Created: latency-svc-lbpp7
Dec 16 10:49:15.949: INFO: Got endpoints: latency-svc-lbpp7 [2.965633216s]
Dec 16 10:49:16.152: INFO: Created: latency-svc-cldkc
Dec 16 10:49:16.191: INFO: Got endpoints: latency-svc-cldkc [3.138294036s]
Dec 16 10:49:16.359: INFO: Created: latency-svc-7c99r
Dec 16 10:49:16.386: INFO: Got endpoints: latency-svc-7c99r [3.062401141s]
Dec 16 10:49:16.827: INFO: Created: latency-svc-z46h9
Dec 16 10:49:16.874: INFO: Got endpoints: latency-svc-z46h9 [3.234948247s]
Dec 16 10:49:17.027: INFO: Created: latency-svc-8tmr5
Dec 16 10:49:17.067: INFO: Got endpoints: latency-svc-8tmr5 [2.952037202s]
Dec 16 10:49:17.172: INFO: Created: latency-svc-gl58k
Dec 16 10:49:17.200: INFO: Got endpoints: latency-svc-gl58k [3.041758397s]
Dec 16 10:49:17.354: INFO: Created: latency-svc-nl42d
Dec 16 10:49:17.385: INFO: Got endpoints: latency-svc-nl42d [2.781061008s]
Dec 16 10:49:17.545: INFO: Created: latency-svc-89n8n
Dec 16 10:49:17.560: INFO: Got endpoints: latency-svc-89n8n [2.683915887s]
Dec 16 10:49:17.774: INFO: Created: latency-svc-8qnqx
Dec 16 10:49:17.775: INFO: Got endpoints: latency-svc-8qnqx [2.766499317s]
Dec 16 10:49:18.049: INFO: Created: latency-svc-dlx7b
Dec 16 10:49:18.050: INFO: Got endpoints: latency-svc-dlx7b [3.0125697s]
Dec 16 10:49:18.242: INFO: Created: latency-svc-npdmd
Dec 16 10:49:18.262: INFO: Got endpoints: latency-svc-npdmd [3.166188644s]
Dec 16 10:49:18.336: INFO: Created: latency-svc-f8bkk
Dec 16 10:49:18.397: INFO: Got endpoints: latency-svc-f8bkk [3.127172192s]
Dec 16 10:49:18.438: INFO: Created: latency-svc-n5wfg
Dec 16 10:49:18.464: INFO: Got endpoints: latency-svc-n5wfg [3.134592997s]
Dec 16 10:49:18.648: INFO: Created: latency-svc-lsd9q
Dec 16 10:49:18.691: INFO: Got endpoints: latency-svc-lsd9q [3.16998386s]
Dec 16 10:49:18.748: INFO: Created: latency-svc-fmv96
Dec 16 10:49:18.889: INFO: Got endpoints: latency-svc-fmv96 [3.117554939s]
Dec 16 10:49:18.916: INFO: Created: latency-svc-qgvn7
Dec 16 10:49:18.941: INFO: Got endpoints: latency-svc-qgvn7 [2.991435667s]
Dec 16 10:49:19.124: INFO: Created: latency-svc-rlfmp
Dec 16 10:49:19.155: INFO: Got endpoints: latency-svc-rlfmp [2.964383117s]
Dec 16 10:49:19.293: INFO: Created: latency-svc-55lhq
Dec 16 10:49:19.329: INFO: Got endpoints: latency-svc-55lhq [2.942769116s]
Dec 16 10:49:19.389: INFO: Created: latency-svc-hlb79
Dec 16 10:49:19.474: INFO: Got endpoints: latency-svc-hlb79 [2.599297599s]
Dec 16 10:49:19.571: INFO: Created: latency-svc-v52tv
Dec 16 10:49:19.712: INFO: Got endpoints: latency-svc-v52tv [2.644233429s]
Dec 16 10:49:19.759: INFO: Created: latency-svc-5cq94
Dec 16 10:49:19.796: INFO: Got endpoints: latency-svc-5cq94 [2.595449276s]
Dec 16 10:49:20.093: INFO: Created: latency-svc-wv7st
Dec 16 10:49:20.208: INFO: Got endpoints: latency-svc-wv7st [2.822102901s]
Dec 16 10:49:20.689: INFO: Created: latency-svc-fwgzw
Dec 16 10:49:20.716: INFO: Got endpoints: latency-svc-fwgzw [3.155500335s]
Dec 16 10:49:20.907: INFO: Created: latency-svc-kfngq
Dec 16 10:49:20.957: INFO: Got endpoints: latency-svc-kfngq [3.182008404s]
Dec 16 10:49:21.221: INFO: Created: latency-svc-4s5fc
Dec 16 10:49:21.285: INFO: Got endpoints: latency-svc-4s5fc [3.235087858s]
Dec 16 10:49:21.306: INFO: Created: latency-svc-5bxsz
Dec 16 10:49:21.556: INFO: Got endpoints: latency-svc-5bxsz [3.293970204s]
Dec 16 10:49:22.496: INFO: Created: latency-svc-2qmbx
Dec 16 10:49:22.505: INFO: Got endpoints: latency-svc-2qmbx [4.107570824s]
Dec 16 10:49:22.760: INFO: Created: latency-svc-8lvsf
Dec 16 10:49:22.796: INFO: Got endpoints: latency-svc-8lvsf [4.331998254s]
Dec 16 10:49:23.027: INFO: Created: latency-svc-cjbj8
Dec 16 10:49:23.038: INFO: Got endpoints: latency-svc-cjbj8 [4.346724053s]
Dec 16 10:49:23.213: INFO: Created: latency-svc-6p9pd
Dec 16 10:49:23.225: INFO: Got endpoints: latency-svc-6p9pd [4.335296613s]
Dec 16 10:49:23.399:
INFO: Created: latency-svc-hz5wx Dec 16 10:49:23.406: INFO: Got endpoints: latency-svc-hz5wx [4.465013321s] Dec 16 10:49:23.451: INFO: Created: latency-svc-nzvvd Dec 16 10:49:23.621: INFO: Created: latency-svc-gkp4f Dec 16 10:49:23.623: INFO: Got endpoints: latency-svc-nzvvd [4.46686156s] Dec 16 10:49:23.681: INFO: Got endpoints: latency-svc-gkp4f [4.351920338s] Dec 16 10:49:23.827: INFO: Created: latency-svc-pttbm Dec 16 10:49:23.839: INFO: Got endpoints: latency-svc-pttbm [4.364393948s] Dec 16 10:49:24.024: INFO: Created: latency-svc-ksk9c Dec 16 10:49:24.049: INFO: Got endpoints: latency-svc-ksk9c [4.336588755s] Dec 16 10:49:24.277: INFO: Created: latency-svc-fxmfd Dec 16 10:49:24.333: INFO: Got endpoints: latency-svc-fxmfd [4.536290549s] Dec 16 10:49:24.446: INFO: Created: latency-svc-b424h Dec 16 10:49:24.477: INFO: Got endpoints: latency-svc-b424h [4.26858566s] Dec 16 10:49:24.677: INFO: Created: latency-svc-d2qxh Dec 16 10:49:24.699: INFO: Got endpoints: latency-svc-d2qxh [3.983019962s] Dec 16 10:49:24.780: INFO: Created: latency-svc-wckbg Dec 16 10:49:24.840: INFO: Got endpoints: latency-svc-wckbg [3.882486644s] Dec 16 10:49:24.916: INFO: Created: latency-svc-jzgpp Dec 16 10:49:24.937: INFO: Got endpoints: latency-svc-jzgpp [3.651163264s] Dec 16 10:49:25.127: INFO: Created: latency-svc-wl8mx Dec 16 10:49:25.150: INFO: Got endpoints: latency-svc-wl8mx [3.593010676s] Dec 16 10:49:25.284: INFO: Created: latency-svc-9w5f9 Dec 16 10:49:25.327: INFO: Got endpoints: latency-svc-9w5f9 [2.821177267s] Dec 16 10:49:25.470: INFO: Created: latency-svc-xwhff Dec 16 10:49:25.491: INFO: Got endpoints: latency-svc-xwhff [2.694565197s] Dec 16 10:49:25.540: INFO: Created: latency-svc-82cbs Dec 16 10:49:25.552: INFO: Got endpoints: latency-svc-82cbs [2.514107879s] Dec 16 10:49:25.769: INFO: Created: latency-svc-dsxrh Dec 16 10:49:25.795: INFO: Got endpoints: latency-svc-dsxrh [2.570626591s] Dec 16 10:49:26.030: INFO: Created: latency-svc-mv2rt Dec 16 10:49:26.049: INFO: Got 
endpoints: latency-svc-mv2rt [2.642694694s] Dec 16 10:49:26.256: INFO: Created: latency-svc-k729c Dec 16 10:49:26.286: INFO: Got endpoints: latency-svc-k729c [2.66309738s] Dec 16 10:49:26.453: INFO: Created: latency-svc-rktrh Dec 16 10:49:26.482: INFO: Got endpoints: latency-svc-rktrh [2.800145271s] Dec 16 10:49:26.679: INFO: Created: latency-svc-w4jjv Dec 16 10:49:26.700: INFO: Got endpoints: latency-svc-w4jjv [2.860818424s] Dec 16 10:49:26.739: INFO: Created: latency-svc-nng95 Dec 16 10:49:26.758: INFO: Got endpoints: latency-svc-nng95 [2.70871524s] Dec 16 10:49:26.895: INFO: Created: latency-svc-grqp6 Dec 16 10:49:26.907: INFO: Got endpoints: latency-svc-grqp6 [2.573907719s] Dec 16 10:49:27.051: INFO: Created: latency-svc-j7dsf Dec 16 10:49:27.067: INFO: Got endpoints: latency-svc-j7dsf [2.590107868s] Dec 16 10:49:27.205: INFO: Created: latency-svc-rh8ln Dec 16 10:49:27.222: INFO: Got endpoints: latency-svc-rh8ln [2.522360862s] Dec 16 10:49:27.289: INFO: Created: latency-svc-2tb56 Dec 16 10:49:27.393: INFO: Got endpoints: latency-svc-2tb56 [2.552954141s] Dec 16 10:49:27.419: INFO: Created: latency-svc-72zxg Dec 16 10:49:27.448: INFO: Got endpoints: latency-svc-72zxg [2.510904686s] Dec 16 10:49:27.555: INFO: Created: latency-svc-2c6d7 Dec 16 10:49:27.592: INFO: Got endpoints: latency-svc-2c6d7 [2.44162547s] Dec 16 10:49:27.804: INFO: Created: latency-svc-q5487 Dec 16 10:49:27.840: INFO: Got endpoints: latency-svc-q5487 [2.512397873s] Dec 16 10:49:27.872: INFO: Created: latency-svc-gtlh2 Dec 16 10:49:28.029: INFO: Got endpoints: latency-svc-gtlh2 [2.53777292s] Dec 16 10:49:28.058: INFO: Created: latency-svc-qpfsn Dec 16 10:49:28.079: INFO: Got endpoints: latency-svc-qpfsn [2.526829823s] Dec 16 10:49:28.266: INFO: Created: latency-svc-pkw8s Dec 16 10:49:28.298: INFO: Got endpoints: latency-svc-pkw8s [2.502042701s] Dec 16 10:49:28.339: INFO: Created: latency-svc-6rmc4 Dec 16 10:49:28.453: INFO: Got endpoints: latency-svc-6rmc4 [2.403612773s] Dec 16 10:49:28.542: 
INFO: Created: latency-svc-btzvr Dec 16 10:49:28.678: INFO: Got endpoints: latency-svc-btzvr [2.391545558s] Dec 16 10:49:28.699: INFO: Created: latency-svc-fbxq6 Dec 16 10:49:28.835: INFO: Got endpoints: latency-svc-fbxq6 [2.352740352s] Dec 16 10:49:28.850: INFO: Created: latency-svc-p7wlv Dec 16 10:49:28.878: INFO: Got endpoints: latency-svc-p7wlv [2.177350674s] Dec 16 10:49:29.033: INFO: Created: latency-svc-n5m8w Dec 16 10:49:29.067: INFO: Got endpoints: latency-svc-n5m8w [2.309610512s] Dec 16 10:49:29.113: INFO: Created: latency-svc-4dmv6 Dec 16 10:49:29.199: INFO: Got endpoints: latency-svc-4dmv6 [2.292214529s] Dec 16 10:49:29.200: INFO: Latencies: [151.795263ms 186.774955ms 255.096475ms 544.678213ms 821.826051ms 1.02855034s 1.244920192s 1.295494552s 1.485745518s 1.744512743s 2.015279038s 2.095004777s 2.105148859s 2.131462919s 2.14786387s 2.177350674s 2.187305189s 2.244612049s 2.259102548s 2.292214529s 2.299087092s 2.306468075s 2.309610512s 2.338491361s 2.352740352s 2.391545558s 2.403612773s 2.44162547s 2.447316232s 2.461113359s 2.477542081s 2.483934114s 2.496156749s 2.502042701s 2.505442s 2.506067347s 2.510904686s 2.512397873s 2.514107879s 2.522360862s 2.526829823s 2.53777292s 2.53809127s 2.551274483s 2.552062436s 2.552954141s 2.554033005s 2.570626591s 2.573907719s 2.575944327s 2.581889762s 2.590107868s 2.592826833s 2.595449276s 2.599297599s 2.60165349s 2.60198238s 2.602675918s 2.628776611s 2.642694694s 2.643779966s 2.644233429s 2.653184328s 2.66309738s 2.665070985s 2.683915887s 2.686843753s 2.694565197s 2.69489409s 2.697268383s 2.70871524s 2.711572688s 2.717485139s 2.728003227s 2.733401648s 2.736525976s 2.750182259s 2.750312252s 2.751686116s 2.761669849s 2.766499317s 2.769407291s 2.781061008s 2.783083841s 2.798817126s 2.800145271s 2.801312213s 2.802373067s 2.804032835s 2.809375851s 2.814626225s 2.816252408s 2.821177267s 2.822102901s 2.829188564s 2.833879799s 2.835138261s 2.846398502s 2.849817036s 2.860818424s 2.868056472s 2.881787254s 2.888906749s 
2.891374941s 2.892426958s 2.894466328s 2.895716709s 2.901110347s 2.917794025s 2.924449107s 2.932403264s 2.935174878s 2.942461472s 2.942769116s 2.943600363s 2.952037202s 2.964383117s 2.965633216s 2.974535628s 2.975905523s 2.985276952s 2.987015698s 2.991435667s 2.994886336s 2.998663247s 3.005390425s 3.007240227s 3.008275653s 3.00985461s 3.010270479s 3.012557783s 3.0125697s 3.016345001s 3.022025594s 3.023723076s 3.026603225s 3.028792254s 3.028940236s 3.034199014s 3.041758397s 3.051491202s 3.052931178s 3.059991043s 3.060510338s 3.062401141s 3.078373011s 3.096079075s 3.098891186s 3.099882261s 3.102452093s 3.105649189s 3.107014891s 3.109590301s 3.117554939s 3.121792048s 3.127172192s 3.134592997s 3.138294036s 3.141951706s 3.149253888s 3.155500335s 3.158233343s 3.166188644s 3.16998386s 3.182008404s 3.227738181s 3.234948247s 3.235087858s 3.2820956s 3.293970204s 3.318792958s 3.335317867s 3.344937938s 3.405726918s 3.415284388s 3.416158471s 3.417463354s 3.425799624s 3.433416121s 3.434395101s 3.441487441s 3.480498193s 3.48244487s 3.486636915s 3.493014509s 3.593010676s 3.651163264s 3.882486644s 3.983019962s 4.107570824s 4.26858566s 4.331998254s 4.335296613s 4.336588755s 4.346724053s 4.351920338s 4.364393948s 4.465013321s 4.46686156s 4.536290549s] Dec 16 10:49:29.201: INFO: 50 %ile: 2.868056472s Dec 16 10:49:29.201: INFO: 90 %ile: 3.441487441s Dec 16 10:49:29.201: INFO: 99 %ile: 4.46686156s Dec 16 10:49:29.201: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 10:49:29.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-k8spt" for this suite. 
Dec 16 10:50:27.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:50:27.289: INFO: namespace: e2e-tests-svc-latency-k8spt, resource: bindings, ignored listing per whitelist
Dec 16 10:50:27.422: INFO: namespace e2e-tests-svc-latency-k8spt deletion completed in 58.210781278s
• [SLOW TEST:108.934 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:50:27.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-def34b15-1ff1-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 10:50:27.735: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-wfwlb" to be "success or failure"
Dec 16 10:50:27.754: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.665685ms
Dec 16 10:50:29.772: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036427325s
Dec 16 10:50:31.793: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057265758s
Dec 16 10:50:34.107: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371334403s
Dec 16 10:50:36.129: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.393681189s
Dec 16 10:50:38.145: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.409226047s
STEP: Saw pod success
Dec 16 10:50:38.145: INFO: Pod "pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:50:38.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 16 10:50:38.416: INFO: Waiting for pod pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004 to disappear
Dec 16 10:50:38.453: INFO: Pod pod-projected-secrets-def49026-1ff1-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:50:38.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wfwlb" for this suite.
Dec 16 10:50:44.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:50:44.700: INFO: namespace: e2e-tests-projected-wfwlb, resource: bindings, ignored listing per whitelist
Dec 16 10:50:44.740: INFO: namespace e2e-tests-projected-wfwlb deletion completed in 6.248297825s
• [SLOW TEST:17.316 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:50:44.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 10:50:44.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-p925q" to be "success or failure"
Dec 16 10:50:44.925: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.531121ms
Dec 16 10:50:46.952: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03859881s
Dec 16 10:50:48.984: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070842899s
Dec 16 10:50:51.403: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48971002s
Dec 16 10:50:53.415: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50170584s
Dec 16 10:50:55.428: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.515526851s
STEP: Saw pod success
Dec 16 10:50:55.429: INFO: Pod "downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:50:55.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 10:50:55.536: INFO: Waiting for pod downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004 to disappear
Dec 16 10:50:55.551: INFO: Pod downwardapi-volume-e93f3d1d-1ff1-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:50:55.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p925q" for this suite.
Dec 16 10:51:01.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:51:01.836: INFO: namespace: e2e-tests-projected-p925q, resource: bindings, ignored listing per whitelist
Dec 16 10:51:01.858: INFO: namespace e2e-tests-projected-p925q deletion completed in 6.29781732s
• [SLOW TEST:17.118 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:51:01.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 10:51:02.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-gjlvk" to be "success or failure"
Dec 16 10:51:02.106: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 35.919059ms
Dec 16 10:51:04.124: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054754153s
Dec 16 10:51:06.139: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069602498s
Dec 16 10:51:08.158: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088316474s
Dec 16 10:51:10.180: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10989852s
Dec 16 10:51:12.196: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126117806s
STEP: Saw pod success
Dec 16 10:51:12.196: INFO: Pod "downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:51:12.206: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 10:51:13.309: INFO: Waiting for pod downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004 to disappear
Dec 16 10:51:13.547: INFO: Pod downwardapi-volume-f3718768-1ff1-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:51:13.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gjlvk" for this suite.
Dec 16 10:51:19.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:51:19.926: INFO: namespace: e2e-tests-downward-api-gjlvk, resource: bindings, ignored listing per whitelist
Dec 16 10:51:19.951: INFO: namespace e2e-tests-downward-api-gjlvk deletion completed in 6.339783717s
• [SLOW TEST:18.093 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:51:19.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 16 10:51:20.150: INFO: Waiting up to 5m0s for pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004" in namespace "e2e-tests-containers-6hh7c" to be "success or failure"
Dec 16 10:51:20.234: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 83.842666ms
Dec 16 10:51:22.261: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11158226s
Dec 16 10:51:24.281: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13073551s
Dec 16 10:51:26.656: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506359581s
Dec 16 10:51:28.678: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527835235s
Dec 16 10:51:30.708: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.55842351s
Dec 16 10:51:32.747: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.5971473s
STEP: Saw pod success
Dec 16 10:51:32.747: INFO: Pod "client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:51:32.750: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 10:51:33.193: INFO: Waiting for pod client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004 to disappear
Dec 16 10:51:33.553: INFO: Pod client-containers-fe3fcb69-1ff1-11ea-9388-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:51:33.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-6hh7c" for this suite.
Dec 16 10:51:39.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:51:40.031: INFO: namespace: e2e-tests-containers-6hh7c, resource: bindings, ignored listing per whitelist
Dec 16 10:51:40.065: INFO: namespace e2e-tests-containers-6hh7c deletion completed in 6.492331334s
• [SLOW TEST:20.113 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:51:40.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:51:53.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-h4dqq" for this suite.
Dec 16 10:52:17.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:52:17.478: INFO: namespace: e2e-tests-replication-controller-h4dqq, resource: bindings, ignored listing per whitelist
Dec 16 10:52:17.661: INFO: namespace e2e-tests-replication-controller-h4dqq deletion completed in 24.263259664s
• [SLOW TEST:37.597 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:52:17.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:52:17.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vj89l" for this suite.
Dec 16 10:52:42.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:52:42.163: INFO: namespace: e2e-tests-pods-vj89l, resource: bindings, ignored listing per whitelist
Dec 16 10:52:42.223: INFO: namespace e2e-tests-pods-vj89l deletion completed in 24.163196663s
• [SLOW TEST:24.561 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:52:42.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2f487f9d-1ff2-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 10:52:42.438: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-8t6bt" to be "success or failure"
Dec 16 10:52:42.446: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.293387ms
Dec 16 10:52:44.455: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01688546s
Dec 16 10:52:46.498: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059144591s
Dec 16 10:52:48.664: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225255108s
Dec 16 10:52:50.753: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314971342s
Dec 16 10:52:52.796: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.35725041s
STEP: Saw pod success
Dec 16 10:52:52.796: INFO: Pod "pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:52:52.808: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004 container projected-configmap-volume-test:
STEP: delete the pod
Dec 16 10:52:53.034: INFO: Waiting for pod pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004 to disappear
Dec 16 10:52:53.072: INFO: Pod pod-projected-configmaps-2f4a099e-1ff2-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:52:53.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8t6bt" for this suite.
Dec 16 10:53:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:53:01.184: INFO: namespace: e2e-tests-projected-8t6bt, resource: bindings, ignored listing per whitelist
Dec 16 10:53:01.283: INFO: namespace e2e-tests-projected-8t6bt deletion completed in 8.200550226s
• [SLOW TEST:19.060 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:53:01.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 16 10:53:12.124: INFO: Successfully updated pod "annotationupdate3aa8ca52-1ff2-11ea-9388-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:53:14.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tlnkr" for this suite.
Dec 16 10:53:38.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:53:38.434: INFO: namespace: e2e-tests-projected-tlnkr, resource: bindings, ignored listing per whitelist
Dec 16 10:53:38.471: INFO: namespace e2e-tests-projected-tlnkr deletion completed in 24.281738321s
• [SLOW TEST:37.188 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:53:38.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 16 10:53:38.928: INFO: Number of nodes with available pods: 0
Dec 16 10:53:38.928: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:39.958: INFO: Number of nodes with available pods: 0
Dec 16 10:53:39.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:41.110: INFO: Number of nodes with available pods: 0
Dec 16 10:53:41.111: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:41.960: INFO: Number of nodes with available pods: 0
Dec 16 10:53:41.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:42.983: INFO: Number of nodes with available pods: 0
Dec 16 10:53:42.983: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:43.965: INFO: Number of nodes with available pods: 0
Dec 16 10:53:43.966: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:46.080: INFO: Number of nodes with available pods: 0
Dec 16 10:53:46.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:46.964: INFO: Number of nodes with available pods: 0
Dec 16 10:53:46.964: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:47.959: INFO: Number of nodes with available pods: 0
Dec 16 10:53:47.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:48.958: INFO: Number of nodes with available pods: 0
Dec 16 10:53:48.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:49.963: INFO: Number of nodes with available pods: 1
Dec 16 10:53:49.964: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 16 10:53:50.018: INFO: Number of nodes with available pods: 0
Dec 16 10:53:50.018: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:51.044: INFO: Number of nodes with available pods: 0
Dec 16 10:53:51.045: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:52.043: INFO: Number of nodes with available pods: 0
Dec 16 10:53:52.043: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:53.039: INFO: Number of nodes with available pods: 0
Dec 16 10:53:53.039: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:54.073: INFO: Number of nodes with available pods: 0
Dec 16 10:53:54.073: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:55.041: INFO: Number of nodes with available pods: 0
Dec 16 10:53:55.041: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:56.052: INFO: Number of nodes with available pods: 0
Dec 16 10:53:56.052: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:57.230: INFO: Number of nodes with available pods: 0
Dec 16 10:53:57.230: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:58.066: INFO: Number of nodes with available pods: 0
Dec 16 10:53:58.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:53:59.375: INFO: Number of nodes with available pods: 0
Dec 16 10:53:59.375: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:00.047: INFO: Number of nodes with available pods: 0
Dec 16 10:54:00.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:01.078: INFO: Number of nodes with available pods: 0
Dec 16 10:54:01.078: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:02.042: INFO: Number of nodes with available pods: 0
Dec 16 10:54:02.042: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:03.335: INFO: Number of nodes with available pods: 0
Dec 16 10:54:03.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:04.039: INFO: Number of nodes with available pods: 0
Dec 16 10:54:04.039: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:05.046: INFO: Number of nodes with available pods: 0
Dec 16 10:54:05.046: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:06.047: INFO: Number of nodes with available pods: 0
Dec 16 10:54:06.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 10:54:07.075: INFO: Number of nodes with available pods: 1
Dec 16 10:54:07.075: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-j28fq, will wait for the garbage collector to delete the pods
Dec 16 10:54:07.149: INFO: Deleting DaemonSet.extensions daemon-set took: 13.637376ms
Dec 16 10:54:07.350: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.619566ms
Dec 16 10:54:22.830: INFO: Number of nodes with available pods: 0
Dec 16 10:54:22.830: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 10:54:22.844: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-j28fq/daemonsets","resourceVersion":"15000166"},"items":null}
Dec 16 10:54:22.857: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-j28fq/pods","resourceVersion":"15000166"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:54:22.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-j28fq" for this suite.
Dec 16 10:54:28.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:54:28.973: INFO: namespace: e2e-tests-daemonsets-j28fq, resource: bindings, ignored listing per whitelist
Dec 16 10:54:29.105: INFO: namespace e2e-tests-daemonsets-j28fq deletion completed in 6.224592818s
• [SLOW TEST:50.633 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:54:29.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 16 10:54:42.437: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:54:44.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-2zq2t" for this suite.
Dec 16 10:55:08.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:55:08.750: INFO: namespace: e2e-tests-replicaset-2zq2t, resource: bindings, ignored listing per whitelist
Dec 16 10:55:08.817: INFO: namespace e2e-tests-replicaset-2zq2t deletion completed in 24.698810005s
• [SLOW TEST:39.711 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:55:08.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 16 10:55:09.017: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix018953659/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:55:09.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5bw2x" for this suite.
Dec 16 10:55:15.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:55:15.546: INFO: namespace: e2e-tests-kubectl-5bw2x, resource: bindings, ignored listing per whitelist
Dec 16 10:55:15.566: INFO: namespace e2e-tests-kubectl-5bw2x deletion completed in 6.393828955s
• [SLOW TEST:6.748 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:55:15.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8abb29c7-1ff2-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 10:55:15.884: INFO: Waiting up to 5m0s for pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-8wjdq" to be "success or failure"
Dec 16 10:55:15.928: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 44.187199ms
Dec 16 10:55:18.128: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244014188s
Dec 16 10:55:20.139: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254899482s
Dec 16 10:55:22.568: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68430998s
Dec 16 10:55:24.633: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749179948s
Dec 16 10:55:26.655: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.770820749s
Dec 16 10:55:28.668: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.783955547s
STEP: Saw pod success
Dec 16 10:55:28.668: INFO: Pod "pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:55:28.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004 container secret-env-test:
STEP: delete the pod
Dec 16 10:55:28.911: INFO: Waiting for pod pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004 to disappear
Dec 16 10:55:28.921: INFO: Pod pod-secrets-8ac0c7f8-1ff2-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:55:28.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8wjdq" for this suite.
Dec 16 10:55:35.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:55:35.925: INFO: namespace: e2e-tests-secrets-8wjdq, resource: bindings, ignored listing per whitelist
Dec 16 10:55:35.983: INFO: namespace e2e-tests-secrets-8wjdq deletion completed in 7.040736638s
• [SLOW TEST:20.417 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:55:35.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 16 10:55:36.187: INFO: Waiting up to 5m0s for pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-l72m6" to be "success or failure"
Dec 16 10:55:36.215: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.357087ms
Dec 16 10:55:38.675: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488034744s
Dec 16 10:55:40.688: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500824166s
Dec 16 10:55:42.709: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521805058s
Dec 16 10:55:44.724: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536792405s
Dec 16 10:55:46.770: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.582968936s
STEP: Saw pod success
Dec 16 10:55:46.771: INFO: Pod "pod-96daa48a-1ff2-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:55:46.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-96daa48a-1ff2-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 10:55:46.953: INFO: Waiting for pod pod-96daa48a-1ff2-11ea-9388-0242ac110004 to disappear
Dec 16 10:55:46.968: INFO: Pod pod-96daa48a-1ff2-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:55:46.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l72m6" for this suite.
Dec 16 10:55:53.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:55:53.152: INFO: namespace: e2e-tests-emptydir-l72m6, resource: bindings, ignored listing per whitelist
Dec 16 10:55:53.304: INFO: namespace e2e-tests-emptydir-l72m6 deletion completed in 6.2342867s
• [SLOW TEST:17.320 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:55:53.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 16 10:55:53.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:55:56.024: INFO: stderr: ""
Dec 16 10:55:56.025: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 10:55:56.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:55:56.278: INFO: stderr: ""
Dec 16 10:55:56.278: INFO: stdout: "update-demo-nautilus-7nrkv update-demo-nautilus-fvg6f "
Dec 16 10:55:56.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nrkv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:55:56.451: INFO: stderr: ""
Dec 16 10:55:56.452: INFO: stdout: ""
Dec 16 10:55:56.452: INFO: update-demo-nautilus-7nrkv is created but not running
Dec 16 10:56:01.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:01.667: INFO: stderr: ""
Dec 16 10:56:01.667: INFO: stdout: "update-demo-nautilus-7nrkv update-demo-nautilus-fvg6f "
Dec 16 10:56:01.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nrkv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:01.811: INFO: stderr: ""
Dec 16 10:56:01.811: INFO: stdout: ""
Dec 16 10:56:01.811: INFO: update-demo-nautilus-7nrkv is created but not running
Dec 16 10:56:06.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:07.501: INFO: stderr: ""
Dec 16 10:56:07.502: INFO: stdout: "update-demo-nautilus-7nrkv update-demo-nautilus-fvg6f "
Dec 16 10:56:07.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nrkv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:07.879: INFO: stderr: ""
Dec 16 10:56:07.880: INFO: stdout: ""
Dec 16 10:56:07.880: INFO: update-demo-nautilus-7nrkv is created but not running
Dec 16 10:56:12.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:13.133: INFO: stderr: ""
Dec 16 10:56:13.133: INFO: stdout: "update-demo-nautilus-7nrkv update-demo-nautilus-fvg6f "
Dec 16 10:56:13.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nrkv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:13.251: INFO: stderr: ""
Dec 16 10:56:13.251: INFO: stdout: "true"
Dec 16 10:56:13.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nrkv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:13.397: INFO: stderr: ""
Dec 16 10:56:13.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 10:56:13.397: INFO: validating pod update-demo-nautilus-7nrkv
Dec 16 10:56:13.426: INFO: got data: { "image": "nautilus.jpg" }
Dec 16 10:56:13.426: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 10:56:13.426: INFO: update-demo-nautilus-7nrkv is verified up and running
Dec 16 10:56:13.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvg6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:13.559: INFO: stderr: ""
Dec 16 10:56:13.559: INFO: stdout: "true"
Dec 16 10:56:13.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvg6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:13.740: INFO: stderr: ""
Dec 16 10:56:13.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 10:56:13.740: INFO: validating pod update-demo-nautilus-fvg6f
Dec 16 10:56:13.771: INFO: got data: { "image": "nautilus.jpg" }
Dec 16 10:56:13.771: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 10:56:13.771: INFO: update-demo-nautilus-fvg6f is verified up and running
STEP: using delete to clean up resources
Dec 16 10:56:13.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:13.927: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 10:56:13.928: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 16 10:56:13.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-4tvbj'
Dec 16 10:56:14.240: INFO: stderr: "No resources found.\n"
Dec 16 10:56:14.240: INFO: stdout: ""
Dec 16 10:56:14.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-4tvbj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 16 10:56:14.442: INFO: stderr: ""
Dec 16 10:56:14.442: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:56:14.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4tvbj" for this suite.
Dec 16 10:56:38.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:56:38.622: INFO: namespace: e2e-tests-kubectl-4tvbj, resource: bindings, ignored listing per whitelist
Dec 16 10:56:38.724: INFO: namespace e2e-tests-kubectl-4tvbj deletion completed in 24.261398122s
• [SLOW TEST:45.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:56:38.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 16 10:56:38.985: INFO: Waiting up to 5m0s for pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-bj2bl" to be "success or failure"
Dec 16 10:56:39.161: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 175.236826ms
Dec 16 10:56:41.211: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224995236s
Dec 16 10:56:43.223: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236980739s
Dec 16 10:56:45.234: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248222042s
Dec 16 10:56:47.246: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26072463s
Dec 16 10:56:49.261: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275407904s
Dec 16 10:56:51.282: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.296814751s
STEP: Saw pod success
Dec 16 10:56:51.283: INFO: Pod "pod-bc47f21a-1ff2-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:56:51.292: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bc47f21a-1ff2-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 10:56:52.111: INFO: Waiting for pod pod-bc47f21a-1ff2-11ea-9388-0242ac110004 to disappear
Dec 16 10:56:52.125: INFO: Pod pod-bc47f21a-1ff2-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:56:52.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bj2bl" for this suite.
Dec 16 10:57:00.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:57:00.253: INFO: namespace: e2e-tests-emptydir-bj2bl, resource: bindings, ignored listing per whitelist
Dec 16 10:57:00.308: INFO: namespace e2e-tests-emptydir-bj2bl deletion completed in 8.168208632s
• [SLOW TEST:21.584 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:57:00.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c929024b-1ff2-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 10:57:00.836: INFO: Waiting up to 5m0s for pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-9dxx9" to be "success or failure"
Dec 16 10:57:00.876: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.53043ms
Dec 16 10:57:02.897: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060930481s
Dec 16 10:57:04.906: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069972052s
Dec 16 10:57:07.671: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.834846343s
Dec 16 10:57:10.159: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.322748718s
Dec 16 10:57:12.212: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.375214909s
STEP: Saw pod success
Dec 16 10:57:12.212: INFO: Pod "pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:57:12.217: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 16 10:57:12.292: INFO: Waiting for pod pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004 to disappear
Dec 16 10:57:12.301: INFO: Pod pod-secrets-c95077b2-1ff2-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:57:12.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9dxx9" for this suite.
Dec 16 10:57:18.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:57:18.650: INFO: namespace: e2e-tests-secrets-9dxx9, resource: bindings, ignored listing per whitelist
Dec 16 10:57:18.754: INFO: namespace e2e-tests-secrets-9dxx9 deletion completed in 6.397688557s
STEP: Destroying namespace "e2e-tests-secret-namespace-6hkfq" for this suite.
Dec 16 10:57:24.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:57:25.110: INFO: namespace: e2e-tests-secret-namespace-6hkfq, resource: bindings, ignored listing per whitelist
Dec 16 10:57:25.279: INFO: namespace e2e-tests-secret-namespace-6hkfq deletion completed in 6.524821326s
• [SLOW TEST:24.971 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:57:25.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-9dnn6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9dnn6 to expose endpoints map[]
Dec 16 10:57:25.695: INFO: Get endpoints failed (15.01435ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 16 10:57:26.712: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9dnn6 exposes endpoints map[] (1.03271741s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-9dnn6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9dnn6 to expose endpoints map[pod1:[80]]
Dec 16 10:57:30.897: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.161679086s elapsed, will retry)
Dec 16 10:57:37.136: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (10.401353913s elapsed, will retry)
Dec 16 10:57:38.185: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9dnn6 exposes endpoints map[pod1:[80]] (11.449731175s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-9dnn6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9dnn6 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 16 10:57:43.398: INFO: Unexpected endpoints: found map[d8c07d1b-1ff2-11ea-a994-fa163e34d433:[80]], expected map[pod2:[80] pod1:[80]] (5.195277236s elapsed, will retry)
Dec 16 10:57:48.054: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9dnn6 exposes endpoints map[pod1:[80] pod2:[80]] (9.851236615s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-9dnn6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9dnn6 to expose endpoints map[pod2:[80]]
Dec 16 10:57:49.442: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9dnn6 exposes endpoints map[pod2:[80]] (1.373553172s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-9dnn6
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9dnn6 to expose endpoints map[]
Dec 16 10:57:51.317: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9dnn6 exposes endpoints map[] (1.649457829s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:57:51.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-9dnn6" for this suite.
Dec 16 10:58:15.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:58:15.470: INFO: namespace: e2e-tests-services-9dnn6, resource: bindings, ignored listing per whitelist
Dec 16 10:58:15.565: INFO: namespace e2e-tests-services-9dnn6 deletion completed in 22.976974514s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:50.285 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:58:15.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ws2dn
Dec 16 10:58:26.877: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ws2dn
STEP: checking the pod's current state and verifying that restartCount is present
Dec 16 10:58:26.918: INFO: Initial restart count of pod liveness-http is 0
Dec 16 10:58:55.183: INFO: Restart count of pod e2e-tests-container-probe-ws2dn/liveness-http is now 1 (28.264671604s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:58:55.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ws2dn" for this suite.
Dec 16 10:59:03.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:59:03.471: INFO: namespace: e2e-tests-container-probe-ws2dn, resource: bindings, ignored listing per whitelist
Dec 16 10:59:03.530: INFO: namespace e2e-tests-container-probe-ws2dn deletion completed in 8.293004215s
• [SLOW TEST:47.966 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:59:03.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 16 10:59:04.036: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-nhgvp,SelfLink:/api/v1/namespaces/e2e-tests-watch-nhgvp/configmaps/e2e-watch-test-resource-version,UID:129b9408-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000816,Generation:0,CreationTimestamp:2019-12-16 10:59:03 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 10:59:04.037: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-nhgvp,SelfLink:/api/v1/namespaces/e2e-tests-watch-nhgvp/configmaps/e2e-watch-test-resource-version,UID:129b9408-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000817,Generation:0,CreationTimestamp:2019-12-16 10:59:03 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:59:04.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nhgvp" for this suite.
Dec 16 10:59:10.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:59:10.140: INFO: namespace: e2e-tests-watch-nhgvp, resource: bindings, ignored listing per whitelist
Dec 16 10:59:10.225: INFO: namespace e2e-tests-watch-nhgvp deletion completed in 6.181487059s
• [SLOW TEST:6.694 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:59:10.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 16 10:59:10.450: INFO: Waiting up to 5m0s for pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-hqp25" to be "success or failure"
Dec 16 10:59:10.462: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.606811ms
Dec 16 10:59:13.083: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.632852097s
Dec 16 10:59:15.101: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650778127s
Dec 16 10:59:17.283: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.832725473s
Dec 16 10:59:19.300: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.849468299s
Dec 16 10:59:21.319: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.868884331s
STEP: Saw pod success
Dec 16 10:59:21.320: INFO: Pod "downward-api-16824526-1ff3-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 10:59:21.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-16824526-1ff3-11ea-9388-0242ac110004 container dapi-container:
STEP: delete the pod
Dec 16 10:59:21.441: INFO: Waiting for pod downward-api-16824526-1ff3-11ea-9388-0242ac110004 to disappear
Dec 16 10:59:21.482: INFO: Pod downward-api-16824526-1ff3-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 10:59:21.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hqp25" for this suite.
Dec 16 10:59:27.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 10:59:27.613: INFO: namespace: e2e-tests-downward-api-hqp25, resource: bindings, ignored listing per whitelist
Dec 16 10:59:27.665: INFO: namespace e2e-tests-downward-api-hqp25 deletion completed in 6.170533461s
• [SLOW TEST:17.439 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 10:59:27.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 16 10:59:27.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000879,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 10:59:27.912: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000879,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 16 10:59:37.947: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000892,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 16 10:59:37.948: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000892,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 16 10:59:48.004: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000904,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 10:59:48.005: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000904,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 16 10:59:58.039: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000917,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 10:59:58.040: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-a,UID:20f0e4a9-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000917,Generation:0,CreationTimestamp:2019-12-16 10:59:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 16 11:00:08.059: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-b,UID:38e7e95a-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000930,Generation:0,CreationTimestamp:2019-12-16 11:00:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 11:00:08.059: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-b,UID:38e7e95a-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000930,Generation:0,CreationTimestamp:2019-12-16 11:00:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 16 11:00:18.073: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-b,UID:38e7e95a-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000942,Generation:0,CreationTimestamp:2019-12-16 11:00:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 11:00:18.073: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-8w67w,SelfLink:/api/v1/namespaces/e2e-tests-watch-8w67w/configmaps/e2e-watch-test-configmap-b,UID:38e7e95a-1ff3-11ea-a994-fa163e34d433,ResourceVersion:15000942,Generation:0,CreationTimestamp:2019-12-16 11:00:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:00:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8w67w" for this suite.
Dec 16 11:00:34.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:00:34.243: INFO: namespace: e2e-tests-watch-8w67w, resource: bindings, ignored listing per whitelist
Dec 16 11:00:34.263: INFO: namespace e2e-tests-watch-8w67w deletion completed in 6.173889972s
• [SLOW TEST:66.598 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:00:34.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-kbmfs
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 16 11:00:34.475: INFO: Found 0 stateful pods, waiting for 3
Dec 16 11:00:44.492: INFO: Found 1 stateful pods, waiting for 3
Dec 16 11:00:54.697: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 11:00:54.697: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 11:00:54.697: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 16 11:01:04.524: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 11:01:04.524: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 11:01:04.524: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 11:01:04.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kbmfs ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 11:01:05.378: INFO: stderr: ""
Dec 16 11:01:05.378: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 11:01:05.378: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 16 11:01:15.503: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 16 11:01:25.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kbmfs ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 11:01:26.392: INFO: stderr: ""
Dec 16 11:01:26.392: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 11:01:26.392: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 16 11:01:36.712: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:01:36.712: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 11:01:36.712: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 11:01:46.929: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:01:46.929: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 11:01:46.929: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 11:01:56.740: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:01:56.740: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 11:02:06.744: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:02:06.745: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 16 11:02:16.774: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 16 11:02:26.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kbmfs ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 11:02:27.340: INFO: stderr: ""
Dec 16 11:02:27.340: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 11:02:27.340: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 16 11:02:37.473: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 16 11:02:47.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-kbmfs ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 11:02:48.240: INFO: stderr: ""
Dec 16 11:02:48.241: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 11:02:48.241: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 16 11:02:58.344: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:02:58.344: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:02:58.344: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:08.615: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:03:08.616: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:08.616: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:18.390: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:03:18.390: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:18.390: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:28.391: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:03:28.392: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:38.360: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
Dec 16 11:03:38.360: INFO: Waiting for Pod e2e-tests-statefulset-kbmfs/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 16 11:03:48.854: INFO: Waiting for StatefulSet e2e-tests-statefulset-kbmfs/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 16 11:03:58.376: INFO: Deleting all statefulset in ns e2e-tests-statefulset-kbmfs
Dec 16 11:03:58.382: INFO: Scaling statefulset ss2 to 0
Dec 16 11:04:18.495: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 11:04:18.533: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:04:18.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-kbmfs" for this suite.
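The rolling update and rollback exercised above can be sketched as a StatefulSet manifest. The names, replica count, and images come from the log; all other fields (service name, labels) are assumptions for illustration:

```yaml
# Hypothetical sketch of a StatefulSet like the test's ss2.
# Replicas, name, and images are from the log; labels and serviceName are assumed.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test        # headless service the test creates in its namespace
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate    # pods are replaced in reverse ordinal order (ss2-2, ss2-1, ss2-0)
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Changing the template image to `docker.io/library/nginx:1.15-alpine` (for example with `kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine`) creates the new controller revision seen in the log, and `kubectl rollout undo statefulset/ss2` performs the rollback to the previous revision.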
Dec 16 11:04:26.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:04:26.980: INFO: namespace: e2e-tests-statefulset-kbmfs, resource: bindings, ignored listing per whitelist
Dec 16 11:04:27.034: INFO: namespace e2e-tests-statefulset-kbmfs deletion completed in 8.275070873s
• [SLOW TEST:232.771 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:04:27.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:05:29.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-vvpmd" for this suite.
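The container names above encode the restart policy under test (rpa = Always, rpof = OnFailure, rpn = Never). A hypothetical pod mirroring the `terminate-cmd-rpof` case, so the asserted status fields can be inspected by hand (image and command are assumptions, not taken from the log):

```yaml
# Hypothetical sketch: a container that exits non-zero under restartPolicy OnFailure,
# so RestartCount, Phase, the Ready condition, and State can be observed.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd-rpof
    image: busybox          # assumed image
    command: ["sh", "-c", "exit 1"]
```

`kubectl get pod terminate-cmd-rpof -o jsonpath='{.status.containerStatuses[0].restartCount}'` then surfaces the RestartCount the test asserts on.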
Dec 16 11:05:35.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:05:35.521: INFO: namespace: e2e-tests-container-runtime-vvpmd, resource: bindings, ignored listing per whitelist
Dec 16 11:05:35.539: INFO: namespace e2e-tests-container-runtime-vvpmd deletion completed in 6.255319926s
• [SLOW TEST:68.504 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:05:35.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:05:35.769: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 16 11:05:35.796: INFO: Number of nodes with available pods: 0
Dec 16 11:05:35.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:37.189: INFO: Number of nodes with available pods: 0
Dec 16 11:05:37.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:37.857: INFO: Number of nodes with available pods: 0
Dec 16 11:05:37.857: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:38.959: INFO: Number of nodes with available pods: 0
Dec 16 11:05:38.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:39.880: INFO: Number of nodes with available pods: 0
Dec 16 11:05:39.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:40.844: INFO: Number of nodes with available pods: 0
Dec 16 11:05:40.845: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:42.370: INFO: Number of nodes with available pods: 0
Dec 16 11:05:42.370: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:43.021: INFO: Number of nodes with available pods: 0
Dec 16 11:05:43.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:43.830: INFO: Number of nodes with available pods: 0
Dec 16 11:05:43.830: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:44.850: INFO: Number of nodes with available pods: 0
Dec 16 11:05:44.850: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:45.885: INFO: Number of nodes with available pods: 0
Dec 16 11:05:45.885: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:46.820: INFO: Number of nodes with available pods: 1
Dec 16 11:05:46.820: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 16 11:05:47.007: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:48.134: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:49.135: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:50.126: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:51.148: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:52.126: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:53.130: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:54.147: INFO: Wrong image for pod: daemon-set-tzmzq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 16 11:05:54.147: INFO: Pod daemon-set-tzmzq is not available
Dec 16 11:05:55.147: INFO: Pod daemon-set-7zbfc is not available
STEP: Check that daemon pods are still running on every node of the cluster.
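The "Wrong image for pod" lines above show the RollingUpdate strategy replacing the old pod (nginx) with the new template image (redis) one pod at a time. A hypothetical sketch of a DaemonSet like the test's `daemon-set` (names and images from the log; labels are assumed):

```yaml
# Hypothetical daemon-set sketch; the test updates the template image and relies on
# updateStrategy RollingUpdate to replace pods, as the log's image checks show.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set     # assumed label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app          # assumed container name
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0` followed by `kubectl rollout status daemonset/daemon-set` reproduces the update flow the test drives.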
Dec 16 11:05:55.169: INFO: Number of nodes with available pods: 0
Dec 16 11:05:55.170: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:56.196: INFO: Number of nodes with available pods: 0
Dec 16 11:05:56.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:57.192: INFO: Number of nodes with available pods: 0
Dec 16 11:05:57.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:58.200: INFO: Number of nodes with available pods: 0
Dec 16 11:05:58.200: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:05:59.196: INFO: Number of nodes with available pods: 0
Dec 16 11:05:59.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:06:00.196: INFO: Number of nodes with available pods: 0
Dec 16 11:06:00.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:06:01.629: INFO: Number of nodes with available pods: 0
Dec 16 11:06:01.629: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:06:02.346: INFO: Number of nodes with available pods: 0
Dec 16 11:06:02.346: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:06:03.188: INFO: Number of nodes with available pods: 0
Dec 16 11:06:03.188: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:06:04.210: INFO: Number of nodes with available pods: 1
Dec 16 11:06:04.210: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tqjfr, will wait for the garbage collector to delete the pods
Dec 16 11:06:04.379: INFO: Deleting DaemonSet.extensions daemon-set took: 16.109561ms
Dec 16 11:06:04.479: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.572789ms
Dec 16 11:06:22.696: INFO: Number of nodes with available pods: 0
Dec 16 11:06:22.696: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 11:06:22.701: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tqjfr/daemonsets","resourceVersion":"15001850"},"items":null}
Dec 16 11:06:22.704: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tqjfr/pods","resourceVersion":"15001850"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:06:22.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-tqjfr" for this suite.
Dec 16 11:06:28.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:06:28.914: INFO: namespace: e2e-tests-daemonsets-tqjfr, resource: bindings, ignored listing per whitelist
Dec 16 11:06:28.957: INFO: namespace e2e-tests-daemonsets-tqjfr deletion completed in 6.236912564s
• [SLOW TEST:53.418 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:06:28.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1c1a131e-1ff4-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:06:43.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rb9bw" for this suite.
Dec 16 11:07:07.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:07:07.614: INFO: namespace: e2e-tests-configmap-rb9bw, resource: bindings, ignored listing per whitelist
Dec 16 11:07:07.618: INFO: namespace e2e-tests-configmap-rb9bw deletion completed in 24.238279058s
• [SLOW TEST:38.660 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:07:07.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 16 11:07:07.854: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:07:25.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7ddzs" for this suite.
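The init-container spec the test creates is not dumped in the log; a hypothetical sketch of the shape being verified (all names, images, and commands are assumptions): on a `restartPolicy: Never` pod, init containers must run to completion, in order, before the app container starts.

```yaml
# Hypothetical RestartNever pod with init containers, illustrating the behavior
# the test asserts; none of these names or images appear in the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]      # must exit 0 before init-2 starts
  - name: init-2
    image: busybox
    command: ["true"]      # must exit 0 before run-1 starts
  containers:
  - name: run-1
    image: busybox
    command: ["true"]
```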
Dec 16 11:07:33.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:07:33.680: INFO: namespace: e2e-tests-init-container-7ddzs, resource: bindings, ignored listing per whitelist
Dec 16 11:07:33.995: INFO: namespace e2e-tests-init-container-7ddzs deletion completed in 8.437555678s
• [SLOW TEST:26.377 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:07:33.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:07:34.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7bnjh" for this suite.
Dec 16 11:07:40.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:07:40.536: INFO: namespace: e2e-tests-services-7bnjh, resource: bindings, ignored listing per whitelist
Dec 16 11:07:40.542: INFO: namespace e2e-tests-services-7bnjh deletion completed in 6.23692486s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.546 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:07:40.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:07:40.816: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
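A daemon "with a node selector" schedules pods only onto nodes whose labels match; the blue/green relabeling in the steps that follow drives pods on and off the single node. A hypothetical sketch of such a DaemonSet (the label key `color` and all other fields are assumptions; the log only mentions the values "blue" and "green"):

```yaml
# Hypothetical node-selector daemon sketch: pods appear on a node only once it
# carries the matching label, and are unscheduled when the label changes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: blue        # assumed label key; the log names only "blue"/"green"
      containers:
      - name: app          # assumed container name and image
        image: docker.io/library/nginx:1.14-alpine
```

Relabeling the node, e.g. `kubectl label node hunter-server-hu5at5svl7ps color=green --overwrite`, would then unschedule the daemon pod as seen in the log.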
Dec 16 11:07:40.929: INFO: Number of nodes with available pods: 0 Dec 16 11:07:40.929: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Dec 16 11:07:41.021: INFO: Number of nodes with available pods: 0 Dec 16 11:07:41.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:42.058: INFO: Number of nodes with available pods: 0 Dec 16 11:07:42.058: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:43.045: INFO: Number of nodes with available pods: 0 Dec 16 11:07:43.045: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:44.048: INFO: Number of nodes with available pods: 0 Dec 16 11:07:44.048: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:45.084: INFO: Number of nodes with available pods: 0 Dec 16 11:07:45.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:46.718: INFO: Number of nodes with available pods: 0 Dec 16 11:07:46.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:47.040: INFO: Number of nodes with available pods: 0 Dec 16 11:07:47.040: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:48.076: INFO: Number of nodes with available pods: 0 Dec 16 11:07:48.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:49.041: INFO: Number of nodes with available pods: 0 Dec 16 11:07:49.041: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 16 11:07:50.051: INFO: Number of nodes with available pods: 1 Dec 16 11:07:50.052: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 16 11:07:50.216: INFO: Number of nodes with available pods: 1 Dec 16 11:07:50.216: INFO: Number of 
running nodes: 0, number of available pods: 1
Dec 16 11:07:51.243: INFO: Number of nodes with available pods: 0
Dec 16 11:07:51.243: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 16 11:07:51.279: INFO: Number of nodes with available pods: 0
Dec 16 11:07:51.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:52.295: INFO: Number of nodes with available pods: 0
Dec 16 11:07:52.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:53.429: INFO: Number of nodes with available pods: 0
Dec 16 11:07:53.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:54.291: INFO: Number of nodes with available pods: 0
Dec 16 11:07:54.291: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:55.437: INFO: Number of nodes with available pods: 0
Dec 16 11:07:55.437: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:56.313: INFO: Number of nodes with available pods: 0
Dec 16 11:07:56.313: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:57.303: INFO: Number of nodes with available pods: 0
Dec 16 11:07:57.304: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:58.296: INFO: Number of nodes with available pods: 0
Dec 16 11:07:58.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:07:59.306: INFO: Number of nodes with available pods: 0
Dec 16 11:07:59.306: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:00.296: INFO: Number of nodes with available pods: 0
Dec 16 11:08:00.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:01.349: INFO: Number of nodes with available pods: 0
Dec 16 11:08:01.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:02.293: INFO: Number of nodes with available pods: 0
Dec 16 11:08:02.293: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:03.309: INFO: Number of nodes with available pods: 0
Dec 16 11:08:03.310: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:04.293: INFO: Number of nodes with available pods: 0
Dec 16 11:08:04.293: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:05.297: INFO: Number of nodes with available pods: 0
Dec 16 11:08:05.297: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:06.290: INFO: Number of nodes with available pods: 0
Dec 16 11:08:06.291: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:07.297: INFO: Number of nodes with available pods: 0
Dec 16 11:08:07.297: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:09.014: INFO: Number of nodes with available pods: 0
Dec 16 11:08:09.014: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:10.463: INFO: Number of nodes with available pods: 0
Dec 16 11:08:10.463: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:11.377: INFO: Number of nodes with available pods: 0
Dec 16 11:08:11.377: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:12.313: INFO: Number of nodes with available pods: 0
Dec 16 11:08:12.314: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:13.314: INFO: Number of nodes with available pods: 0
Dec 16 11:08:13.314: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 11:08:14.298: INFO: Number of nodes with available pods: 1
Dec 16 11:08:14.298: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wk4n5, will wait for the garbage collector to delete the pods
Dec 16 11:08:14.410: INFO: Deleting DaemonSet.extensions daemon-set took: 37.873364ms
Dec 16 11:08:14.611: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.031591ms
Dec 16 11:08:32.749: INFO: Number of nodes with available pods: 0
Dec 16 11:08:32.749: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 11:08:32.755: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wk4n5/daemonsets","resourceVersion":"15002153"},"items":null}
Dec 16 11:08:32.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wk4n5/pods","resourceVersion":"15002153"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:08:32.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wk4n5" for this suite.
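The "complex daemon" test above moves the DaemonSet onto differently-labelled nodes and then switches its update strategy. A minimal sketch of such a DaemonSet is below; only the name "daemon-set", the "green" selector value, and the RollingUpdate strategy come from the log — the label key, pod labels, and image are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # assumed pod label
  updateStrategy:
    type: RollingUpdate          # the strategy the test switches to
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green             # "green" from the log; key is assumed
      containers:
      - name: app
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "3600"]
```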
Dec 16 11:08:40.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:08:41.002: INFO: namespace: e2e-tests-daemonsets-wk4n5, resource: bindings, ignored listing per whitelist
Dec 16 11:08:41.085: INFO: namespace e2e-tests-daemonsets-wk4n5 deletion completed in 8.254352398s

• [SLOW TEST:60.542 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:08:41.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 16 11:08:41.227: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 16 11:08:41.289: INFO: Waiting for terminating namespaces to be deleted...
Dec 16 11:08:41.295: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 16 11:08:41.322: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 16 11:08:41.322: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 16 11:08:41.322: INFO: Container weave ready: true, restart count 0
Dec 16 11:08:41.322: INFO: Container weave-npc ready: true, restart count 0
Dec 16 11:08:41.322: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 16 11:08:41.322: INFO: Container coredns ready: true, restart count 0
Dec 16 11:08:41.322: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 16 11:08:41.322: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 16 11:08:41.322: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Dec 16 11:08:41.322: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 16 11:08:41.322: INFO: Container coredns ready: true, restart count 0
Dec 16 11:08:41.322: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 16 11:08:41.322: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
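The relaunch step in this test pins the pod to the freshly labelled node via a nodeSelector. A minimal sketch of such a pod follows; the label key/value match the ones the test applies in the log, while the pod name, image, and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels              # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-70e837b5-1ff4-11ea-9388-0242ac110004: "42"
  containers:
  - name: with-labels
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sleep", "3600"]              # assumed; keeps the pod running
```

With this selector in place, the scheduler will only place the pod on a node carrying that exact label, which is what the "NodeSelector is respected if matching" assertion checks.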
STEP: verifying the node has the label kubernetes.io/e2e-70e837b5-1ff4-11ea-9388-0242ac110004 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-70e837b5-1ff4-11ea-9388-0242ac110004 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-70e837b5-1ff4-11ea-9388-0242ac110004
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:09:03.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rj9j6" for this suite.
Dec 16 11:09:15.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:09:15.940: INFO: namespace: e2e-tests-sched-pred-rj9j6, resource: bindings, ignored listing per whitelist
Dec 16 11:09:16.083: INFO: namespace e2e-tests-sched-pred-rj9j6 deletion completed in 12.277662352s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:34.998 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:09:16.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 16 11:09:16.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 16 11:09:16.620: INFO: stderr: ""
Dec 16 11:09:16.621: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:09:16.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fc7mv" for this suite.
Dec 16 11:09:22.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:09:22.741: INFO: namespace: e2e-tests-kubectl-fc7mv, resource: bindings, ignored listing per whitelist
Dec 16 11:09:22.854: INFO: namespace e2e-tests-kubectl-fc7mv deletion completed in 6.222042065s

• [SLOW TEST:6.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:09:22.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1216 11:09:33.205948       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 11:09:33.206: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:09:33.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-42vq6" for this suite.
Dec 16 11:09:39.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:09:39.294: INFO: namespace: e2e-tests-gc-42vq6, resource: bindings, ignored listing per whitelist
Dec 16 11:09:39.427: INFO: namespace e2e-tests-gc-42vq6 deletion completed in 6.217510291s

• [SLOW TEST:16.573 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:09:39.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:09:39.562: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:09:40.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-rdq6b" for this suite.
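The CRD test above only creates and deletes a definition object. For reference, a minimal CustomResourceDefinition of the kind such a test might register is sketched below, using the `apiextensions.k8s.io/v1beta1` API that this v1.13 cluster serves (it appears in the `kubectl api-versions` output earlier in the log); the group, kind, and plural names here are entirely hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches this v1.13 cluster
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com          # must be <plural>.<group>
spec:
  group: mygroup.example.com               # hypothetical group
  version: v1
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
```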
Dec 16 11:09:46.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:09:46.904: INFO: namespace: e2e-tests-custom-resource-definition-rdq6b, resource: bindings, ignored listing per whitelist
Dec 16 11:09:46.975: INFO: namespace e2e-tests-custom-resource-definition-rdq6b deletion completed in 6.210868619s

• [SLOW TEST:7.547 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:09:46.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-920e9642-1ff4-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 11:09:47.215: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-b7rkd" to be "success or failure"
Dec 16 11:09:47.247: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.50368ms
Dec 16 11:09:49.292: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076756355s
Dec 16 11:09:51.307: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091785676s
Dec 16 11:09:53.329: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113875436s
Dec 16 11:09:55.409: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194046836s
Dec 16 11:09:57.422: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206453782s
Dec 16 11:09:59.441: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.226178515s
STEP: Saw pod success
Dec 16 11:09:59.442: INFO: Pod "pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:09:59.450: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 16 11:10:00.348: INFO: Waiting for pod pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004 to disappear
Dec 16 11:10:00.364: INFO: Pod pod-projected-secrets-921759fc-1ff4-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:10:00.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b7rkd" for this suite.
Dec 16 11:10:06.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:10:06.480: INFO: namespace: e2e-tests-projected-b7rkd, resource: bindings, ignored listing per whitelist
Dec 16 11:10:06.651: INFO: namespace e2e-tests-projected-b7rkd deletion completed in 6.276010633s

• [SLOW TEST:19.676 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:10:06.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 16 11:10:07.415: INFO: Waiting up to 5m0s for pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz" in namespace "e2e-tests-svcaccounts-5pxx5" to be "success or failure"
Dec 16 11:10:07.435: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.172197ms
Dec 16 11:10:09.459: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043819857s
Dec 16 11:10:11.482: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067219519s
Dec 16 11:10:13.743: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328260538s
Dec 16 11:10:15.756: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.341063649s
Dec 16 11:10:17.780: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36440435s
Dec 16 11:10:19.883: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.468110832s
Dec 16 11:10:22.246: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.830624549s
Dec 16 11:10:24.260: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.845334074s
Dec 16 11:10:26.382: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.966452772s
STEP: Saw pod success
Dec 16 11:10:26.382: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz" satisfied condition "success or failure"
Dec 16 11:10:26.390: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz container token-test:
STEP: delete the pod
Dec 16 11:10:26.720: INFO: Waiting for pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz to disappear
Dec 16 11:10:26.739: INFO: Pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-ffrzz no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 16 11:10:26.758: INFO: Waiting up to 5m0s for pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln" in namespace "e2e-tests-svcaccounts-5pxx5" to be "success or failure"
Dec 16 11:10:26.906: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 148.402134ms
Dec 16 11:10:28.921: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162992009s
Dec 16 11:10:30.949: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191572722s
Dec 16 11:10:32.971: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213010896s
Dec 16 11:10:34.983: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 8.225177096s
Dec 16 11:10:37.008: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 10.250627598s
Dec 16 11:10:39.242: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 12.484165898s
Dec 16 11:10:41.299: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 14.541354403s
Dec 16 11:10:43.315: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Pending", Reason="", readiness=false. Elapsed: 16.55717714s
Dec 16 11:10:45.340: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.581992633s
STEP: Saw pod success
Dec 16 11:10:45.340: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln" satisfied condition "success or failure"
Dec 16 11:10:45.346: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln container root-ca-test:
STEP: delete the pod
Dec 16 11:10:46.676: INFO: Waiting for pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln to disappear
Dec 16 11:10:46.882: INFO: Pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-phgln no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 16 11:10:46.918: INFO: Waiting up to 5m0s for pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck" in namespace "e2e-tests-svcaccounts-5pxx5" to be "success or failure"
Dec 16 11:10:46.943: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 24.941213ms
Dec 16 11:10:49.229: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310775692s
Dec 16 11:10:51.245: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327210079s
Dec 16 11:10:54.087: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 7.169013594s
Dec 16 11:10:56.134: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 9.21569829s
Dec 16 11:10:58.159: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 11.240838514s
Dec 16 11:11:00.176: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 13.257667234s
Dec 16 11:11:02.208: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Pending", Reason="", readiness=false. Elapsed: 15.290387532s
Dec 16 11:11:04.223: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.304745716s
STEP: Saw pod success
Dec 16 11:11:04.223: INFO: Pod "pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck" satisfied condition "success or failure"
Dec 16 11:11:04.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck container namespace-test:
STEP: delete the pod
Dec 16 11:11:04.583: INFO: Waiting for pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck to disappear
Dec 16 11:11:04.597: INFO: Pod pod-service-account-9e24ec3c-1ff4-11ea-9388-0242ac110004-w74ck no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:11:04.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-5pxx5" for this suite.
Dec 16 11:11:12.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:11:12.810: INFO: namespace: e2e-tests-svcaccounts-5pxx5, resource: bindings, ignored listing per whitelist
Dec 16 11:11:12.892: INFO: namespace e2e-tests-svcaccounts-5pxx5 deletion completed in 8.28276088s

• [SLOW TEST:66.240 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:11:12.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c54773e1-1ff4-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 11:11:13.079: INFO: Waiting up to 5m0s for pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-wn976" to be "success or failure"
Dec 16 11:11:13.093: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.62289ms
Dec 16 11:11:15.117: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03713955s
Dec 16 11:11:17.173: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094033503s
Dec 16 11:11:19.244: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164887988s
Dec 16 11:11:21.259: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179395783s
Dec 16 11:11:23.294: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214947127s
STEP: Saw pod success
Dec 16 11:11:23.295: INFO: Pod "pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:11:23.299: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 16 11:11:24.358: INFO: Waiting for pod pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004 to disappear
Dec 16 11:11:24.378: INFO: Pod pod-configmaps-c548ab91-1ff4-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:11:24.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wn976" for this suite.
Dec 16 11:11:30.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:11:30.806: INFO: namespace: e2e-tests-configmap-wn976, resource: bindings, ignored listing per whitelist
Dec 16 11:11:30.822: INFO: namespace e2e-tests-configmap-wn976 deletion completed in 6.433031566s

• [SLOW TEST:17.929 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:11:30.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 16 11:11:30.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-gx2t6 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 16 11:11:44.345: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 16 11:11:44.346: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:11:47.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gx2t6" for this suite.
Dec 16 11:11:53.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:11:53.325: INFO: namespace: e2e-tests-kubectl-gx2t6, resource: bindings, ignored listing per whitelist Dec 16 11:11:53.437: INFO: namespace e2e-tests-kubectl-gx2t6 deletion completed in 6.344457484s • [SLOW TEST:22.614 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:11:53.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dd8e596b-1ff4-11ea-9388-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 16 11:11:53.839: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-7nx5f" to be "success or failure" Dec 16 11:11:53.934: INFO: Pod 
"pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 95.373956ms Dec 16 11:11:55.954: INFO: Pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114759475s Dec 16 11:11:57.977: INFO: Pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137812299s Dec 16 11:11:59.991: INFO: Pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151895617s Dec 16 11:12:02.022: INFO: Pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182651459s Dec 16 11:12:04.052: INFO: Pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.213135119s STEP: Saw pod success Dec 16 11:12:04.052: INFO: Pod "pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:12:04.075: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 16 11:12:04.472: INFO: Waiting for pod pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004 to disappear Dec 16 11:12:04.488: INFO: Pod pod-projected-configmaps-dd9140a9-1ff4-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:12:04.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7nx5f" for this suite. 
Dec 16 11:12:11.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:12:11.609: INFO: namespace: e2e-tests-projected-7nx5f, resource: bindings, ignored listing per whitelist Dec 16 11:12:11.958: INFO: namespace e2e-tests-projected-7nx5f deletion completed in 7.450836477s • [SLOW TEST:18.521 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:12:11.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-l2kxg [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for 
selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-l2kxg STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-l2kxg Dec 16 11:12:12.296: INFO: Found 0 stateful pods, waiting for 1 Dec 16 11:12:22.306: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 16 11:12:22.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 11:12:23.080: INFO: stderr: "" Dec 16 11:12:23.080: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 11:12:23.080: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 11:12:23.109: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 16 11:12:23.109: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 11:12:23.159: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 16 11:12:33.256: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996818s Dec 16 11:12:34.274: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.930020896s Dec 16 11:12:35.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.912339514s Dec 16 11:12:36.325: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.887002556s Dec 16 11:12:37.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.861716477s Dec 16 11:12:38.365: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.839967621s Dec 16 11:12:39.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.820807603s Dec 16 11:12:40.421: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 2.785296333s Dec 16 11:12:41.445: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.764964198s Dec 16 11:12:42.465: INFO: Verifying statefulset ss doesn't scale past 1 for another 741.520295ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-l2kxg Dec 16 11:12:43.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 11:12:44.263: INFO: stderr: "" Dec 16 11:12:44.263: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 11:12:44.263: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 11:12:44.291: INFO: Found 1 stateful pods, waiting for 3 Dec 16 11:12:54.413: INFO: Found 2 stateful pods, waiting for 3 Dec 16 11:13:04.364: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 11:13:04.364: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 11:13:04.364: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Dec 16 11:13:14.314: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 11:13:14.315: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 11:13:14.315: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 16 11:13:14.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 11:13:15.134: INFO: 
stderr: "" Dec 16 11:13:15.134: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 11:13:15.134: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 11:13:15.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 11:13:15.844: INFO: stderr: "" Dec 16 11:13:15.844: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 11:13:15.844: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 11:13:15.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 16 11:13:16.327: INFO: stderr: "" Dec 16 11:13:16.327: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 16 11:13:16.327: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 16 11:13:16.327: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 11:13:16.342: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 16 11:13:26.431: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 16 11:13:26.432: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 16 11:13:26.432: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 16 11:13:26.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999994686s Dec 16 11:13:27.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.887232992s Dec 16 11:13:28.657: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 7.830810116s Dec 16 11:13:29.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.816327137s Dec 16 11:13:30.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.793063434s Dec 16 11:13:31.769: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.738463411s Dec 16 11:13:33.493: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.704403239s Dec 16 11:13:34.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.980122253s Dec 16 11:13:35.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.623208ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-l2kxg Dec 16 11:13:36.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 11:13:37.320: INFO: stderr: "" Dec 16 11:13:37.321: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 11:13:37.321: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 11:13:37.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 11:13:37.894: INFO: stderr: "" Dec 16 11:13:37.894: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 11:13:37.894: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 11:13:37.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-l2kxg ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 16 
11:13:39.114: INFO: stderr: "" Dec 16 11:13:39.114: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 16 11:13:39.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 16 11:13:39.114: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 16 11:13:59.275: INFO: Deleting all statefulset in ns e2e-tests-statefulset-l2kxg Dec 16 11:13:59.284: INFO: Scaling statefulset ss to 0 Dec 16 11:13:59.305: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 11:13:59.312: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:13:59.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-l2kxg" for this suite. 
Dec 16 11:14:07.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:14:07.585: INFO: namespace: e2e-tests-statefulset-l2kxg, resource: bindings, ignored listing per whitelist Dec 16 11:14:07.635: INFO: namespace e2e-tests-statefulset-l2kxg deletion completed in 8.27022533s • [SLOW TEST:115.676 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:14:07.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Dec 16 11:14:07.947: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:08.492: INFO: stderr: "" Dec 16 11:14:08.492: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 16 11:14:08.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:08.769: INFO: stderr: "" Dec 16 11:14:08.770: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " Dec 16 11:14:08.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:08.978: INFO: stderr: "" Dec 16 11:14:08.978: INFO: stdout: "" Dec 16 11:14:08.978: INFO: update-demo-nautilus-j54kc is created but not running Dec 16 11:14:13.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:14.155: INFO: stderr: "" Dec 16 11:14:14.155: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " Dec 16 11:14:14.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:14.279: INFO: stderr: "" Dec 16 11:14:14.279: INFO: stdout: "" Dec 16 11:14:14.279: INFO: update-demo-nautilus-j54kc is created but not running Dec 16 11:14:19.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:19.460: INFO: stderr: "" Dec 16 11:14:19.460: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " Dec 16 11:14:19.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:19.563: INFO: stderr: "" Dec 16 11:14:19.563: INFO: stdout: "true" Dec 16 11:14:19.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:19.684: INFO: stderr: "" Dec 16 11:14:19.684: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:19.684: INFO: validating pod update-demo-nautilus-j54kc Dec 16 11:14:19.715: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:19.715: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 11:14:19.715: INFO: update-demo-nautilus-j54kc is verified up and running Dec 16 11:14:19.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n58lc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:19.889: INFO: stderr: "" Dec 16 11:14:19.889: INFO: stdout: "" Dec 16 11:14:19.889: INFO: update-demo-nautilus-n58lc is created but not running Dec 16 11:14:24.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:25.350: INFO: stderr: "" Dec 16 11:14:25.350: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " Dec 16 11:14:25.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:25.530: INFO: stderr: "" Dec 16 11:14:25.530: INFO: stdout: "true" Dec 16 11:14:25.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:25.668: INFO: stderr: "" Dec 16 11:14:25.668: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:25.668: INFO: validating pod update-demo-nautilus-j54kc Dec 16 11:14:25.782: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:25.783: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 16 11:14:25.783: INFO: update-demo-nautilus-j54kc is verified up and running Dec 16 11:14:25.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n58lc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:25.904: INFO: stderr: "" Dec 16 11:14:25.904: INFO: stdout: "true" Dec 16 11:14:25.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n58lc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:26.019: INFO: stderr: "" Dec 16 11:14:26.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:26.019: INFO: validating pod update-demo-nautilus-n58lc Dec 16 11:14:26.027: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:26.028: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 11:14:26.028: INFO: update-demo-nautilus-n58lc is verified up and running STEP: scaling down the replication controller Dec 16 11:14:26.030: INFO: scanned /root for discovery docs: Dec 16 11:14:26.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:28.076: INFO: stderr: "" Dec 16 11:14:28.076: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 16 11:14:28.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:28.294: INFO: stderr: "" Dec 16 11:14:28.294: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 16 11:14:33.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:33.513: INFO: stderr: "" Dec 16 11:14:33.513: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 16 11:14:38.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:38.685: INFO: stderr: "" Dec 16 11:14:38.685: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-n58lc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 16 11:14:43.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:43.971: INFO: stderr: "" Dec 16 11:14:43.971: INFO: stdout: "update-demo-nautilus-j54kc " Dec 16 11:14:43.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:44.208: INFO: stderr: "" Dec 16 11:14:44.208: INFO: stdout: "true" Dec 16 11:14:44.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:44.323: INFO: stderr: "" Dec 16 11:14:44.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:44.323: INFO: validating pod update-demo-nautilus-j54kc Dec 16 11:14:44.333: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:44.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 11:14:44.333: INFO: update-demo-nautilus-j54kc is verified up and running STEP: scaling up the replication controller Dec 16 11:14:44.335: INFO: scanned /root for discovery docs: Dec 16 11:14:44.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:45.966: INFO: stderr: "" Dec 16 11:14:45.966: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 16 11:14:45.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:46.160: INFO: stderr: "" Dec 16 11:14:46.161: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-kkw6q " Dec 16 11:14:46.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:46.462: INFO: stderr: "" Dec 16 11:14:46.463: INFO: stdout: "true" Dec 16 11:14:46.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:46.930: INFO: stderr: "" Dec 16 11:14:46.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:46.930: INFO: validating pod update-demo-nautilus-j54kc Dec 16 11:14:46.951: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:46.952: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 11:14:46.952: INFO: update-demo-nautilus-j54kc is verified up and running Dec 16 11:14:46.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kkw6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:47.118: INFO: stderr: "" Dec 16 11:14:47.118: INFO: stdout: "" Dec 16 11:14:47.118: INFO: update-demo-nautilus-kkw6q is created but not running Dec 16 11:14:52.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:52.310: INFO: stderr: "" Dec 16 11:14:52.310: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-kkw6q " Dec 16 11:14:52.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:52.475: INFO: stderr: "" Dec 16 11:14:52.475: INFO: stdout: "true" Dec 16 11:14:52.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:52.795: INFO: stderr: "" Dec 16 11:14:52.796: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:52.796: INFO: validating pod update-demo-nautilus-j54kc Dec 16 11:14:52.829: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:52.830: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 11:14:52.830: INFO: update-demo-nautilus-j54kc is verified up and running Dec 16 11:14:52.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kkw6q -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:52.961: INFO: stderr: "" Dec 16 11:14:52.961: INFO: stdout: "" Dec 16 11:14:52.961: INFO: update-demo-nautilus-kkw6q is created but not running Dec 16 11:14:57.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:58.153: INFO: stderr: "" Dec 16 11:14:58.154: INFO: stdout: "update-demo-nautilus-j54kc update-demo-nautilus-kkw6q " Dec 16 11:14:58.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:58.271: INFO: stderr: "" Dec 16 11:14:58.272: INFO: stdout: "true" Dec 16 11:14:58.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j54kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:58.417: INFO: stderr: "" Dec 16 11:14:58.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:58.417: INFO: validating pod update-demo-nautilus-j54kc Dec 16 11:14:58.432: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:58.432: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 16 11:14:58.432: INFO: update-demo-nautilus-j54kc is verified up and running Dec 16 11:14:58.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kkw6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:58.615: INFO: stderr: "" Dec 16 11:14:58.615: INFO: stdout: "true" Dec 16 11:14:58.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kkw6q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:58.743: INFO: stderr: "" Dec 16 11:14:58.743: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 16 11:14:58.743: INFO: validating pod update-demo-nautilus-kkw6q Dec 16 11:14:58.760: INFO: got data: { "image": "nautilus.jpg" } Dec 16 11:14:58.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 16 11:14:58.760: INFO: update-demo-nautilus-kkw6q is verified up and running STEP: using delete to clean up resources Dec 16 11:14:58.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:58.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 16 11:14:58.908: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 16 11:14:58.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-sfgqx' Dec 16 11:14:59.113: INFO: stderr: "No resources found.\n" Dec 16 11:14:59.113: INFO: stdout: "" Dec 16 11:14:59.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-sfgqx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 16 11:14:59.234: INFO: stderr: "" Dec 16 11:14:59.234: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:14:59.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sfgqx" for this suite. 
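The cleanup step above filters pods through a go-template that prints only names lacking a `deletionTimestamp`. As a minimal sketch of how that template evaluates, here is the same template run through Go's `text/template` against a hypothetical pod list shaped like the API response (the pod data is invented for illustration, not taken from the cluster):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderNames applies the same go-template kubectl evaluates in the
// cleanup step: emit the name of each item that has no deletionTimestamp.
func renderNames() string {
	t := template.Must(template.New("pods").Parse(
		`{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`))
	// Hypothetical list: one live pod, one pod already marked for deletion.
	list := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{
				"name": "update-demo-nautilus-j54kc"}},
			map[string]interface{}{"metadata": map[string]interface{}{
				"name":              "update-demo-nautilus-kkw6q",
				"deletionTimestamp": "2019-12-16T11:14:58Z"}},
		},
	}
	var b strings.Builder
	if err := t.Execute(&b, list); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Print(renderNames())
}
```

A missing map key yields a nil value in `text/template`, so `not .metadata.deletionTimestamp` is true exactly for pods with no deletion timestamp set, which is why the log's final invocation printed empty stdout once every pod was terminating.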
Dec 16 11:15:23.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:15:23.490: INFO: namespace: e2e-tests-kubectl-sfgqx, resource: bindings, ignored listing per whitelist Dec 16 11:15:23.498: INFO: namespace e2e-tests-kubectl-sfgqx deletion completed in 24.250784214s • [SLOW TEST:75.863 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:15:23.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1216 11:15:54.774798 8 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 16 11:15:54.774: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:15:54.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wkwrm" for this suite. 
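The garbage-collector test above deletes a deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet survives. A sketch of the delete request the test issues against the API server (namespace and deployment name are placeholders, not values from this run):

```
DELETE /apis/apps/v1/namespaces/<ns>/deployments/<name>
Content-Type: application/json

{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With `Orphan`, the owner reference on the dependent ReplicaSet is removed instead of the ReplicaSet being cascaded away, which is the behavior the 30-second observation window verifies.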
Dec 16 11:16:05.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:16:05.948: INFO: namespace: e2e-tests-gc-wkwrm, resource: bindings, ignored listing per whitelist Dec 16 11:16:06.110: INFO: namespace e2e-tests-gc-wkwrm deletion completed in 11.32855361s • [SLOW TEST:42.611 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:16:06.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 16 11:16:06.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-8zxqg" to be "success or 
failure" Dec 16 11:16:06.761: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 45.336872ms Dec 16 11:16:08.770: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054237628s Dec 16 11:16:10.814: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098820174s Dec 16 11:16:13.574: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.858696274s Dec 16 11:16:15.591: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875561521s Dec 16 11:16:17.611: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.895940408s STEP: Saw pod success Dec 16 11:16:17.612: INFO: Pod "downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:16:17.618: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004 container client-container: STEP: delete the pod Dec 16 11:16:18.134: INFO: Waiting for pod downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004 to disappear Dec 16 11:16:18.169: INFO: Pod downwardapi-volume-744fd1d1-1ff5-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:16:18.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8zxqg" for this suite. 
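The downward API test above mounts `limits.memory` into a volume on a container that sets no memory limit, expecting the kubelet to substitute the node's allocatable memory. A sketch of such a pod manifest (the image and command are assumptions for illustration; the e2e framework generates its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                   # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory set: node allocatable memory is exposed instead.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
  restartPolicy: Never
```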
Dec 16 11:16:24.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:16:24.386: INFO: namespace: e2e-tests-downward-api-8zxqg, resource: bindings, ignored listing per whitelist Dec 16 11:16:24.563: INFO: namespace e2e-tests-downward-api-8zxqg deletion completed in 6.331101704s • [SLOW TEST:18.453 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:16:24.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Dec 16 11:16:24.829: INFO: Waiting up to 5m0s for pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004" in namespace "e2e-tests-var-expansion-d9p7z" to be "success or failure" Dec 16 11:16:24.925: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 96.54753ms Dec 16 11:16:27.231: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401938086s Dec 16 11:16:29.245: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416712669s Dec 16 11:16:31.572: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.743296721s Dec 16 11:16:33.597: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768433725s Dec 16 11:16:35.621: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.791837833s Dec 16 11:16:37.646: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.817436364s STEP: Saw pod success Dec 16 11:16:37.646: INFO: Pod "var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:16:37.655: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004 container dapi-container: STEP: delete the pod Dec 16 11:16:37.776: INFO: Waiting for pod var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004 to disappear Dec 16 11:16:37.797: INFO: Pod var-expansion-7f1b63c8-1ff5-11ea-9388-0242ac110004 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:16:37.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-d9p7z" for this suite. 
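The variable-expansion test above verifies that `$(VAR)` references in a container's `args` are substituted with environment variable values by the kubelet before the process starts. A minimal sketch of the pattern being exercised (image and variable names are assumptions, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox              # assumption
    command: ["sh", "-c"]
    # $(MESSAGE) is expanded by the kubelet, not by the shell.
    args: ["echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "test-value"
  restartPolicy: Never
```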
Dec 16 11:16:44.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:16:44.171: INFO: namespace: e2e-tests-var-expansion-d9p7z, resource: bindings, ignored listing per whitelist Dec 16 11:16:44.176: INFO: namespace e2e-tests-var-expansion-d9p7z deletion completed in 6.268355292s • [SLOW TEST:19.612 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:16:44.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-8ac3b6c5-1ff5-11ea-9388-0242ac110004 STEP: Creating a pod to test consume secrets Dec 16 11:16:44.403: INFO: Waiting up to 5m0s for pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-dgr2g" to be "success or failure" Dec 16 11:16:44.412: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.368553ms Dec 16 11:16:46.430: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027461507s Dec 16 11:16:48.470: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066943485s Dec 16 11:16:50.512: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108741357s Dec 16 11:16:52.556: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152873634s Dec 16 11:16:54.593: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189981324s STEP: Saw pod success Dec 16 11:16:54.593: INFO: Pod "pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:16:54.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004 container secret-volume-test: STEP: delete the pod Dec 16 11:16:54.741: INFO: Waiting for pod pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004 to disappear Dec 16 11:16:54.757: INFO: Pod pod-secrets-8ac4c962-1ff5-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:16:54.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dgr2g" for this suite. 
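The secrets test above consumes a secret through a volume with key-to-path mappings and an explicit item mode. A sketch of the shape being tested (names and values are placeholders; the suite generates random suffixes as seen in the log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example   # hypothetical name
data:
  data-1: dmFsdWUtMQ==            # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example       # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox                # assumption
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1   # mapping: key is remounted under a new path
        mode: 0400              # per-item file mode, the "Item Mode" under test
  restartPolicy: Never
```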
Dec 16 11:17:00.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:17:00.869: INFO: namespace: e2e-tests-secrets-dgr2g, resource: bindings, ignored listing per whitelist Dec 16 11:17:01.017: INFO: namespace e2e-tests-secrets-dgr2g deletion completed in 6.250762424s • [SLOW TEST:16.840 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:17:01.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 16 11:17:01.273: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 16 11:17:06.980: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 16 11:17:10.999: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 16 11:17:13.006: INFO: Creating deployment "test-rollover-deployment" Dec 16 11:17:13.031: INFO: Make sure deployment 
"test-rollover-deployment" performs scaling operations Dec 16 11:17:15.064: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 16 11:17:15.077: INFO: Ensure that both replica sets have 1 created replica Dec 16 11:17:15.085: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 16 11:17:15.108: INFO: Updating deployment test-rollover-deployment Dec 16 11:17:15.108: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 16 11:17:17.173: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Dec 16 11:17:17.187: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 16 11:17:17.193: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:17.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091836, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:19.217: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:19.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091836, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:21.226: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:21.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091836, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:23.535: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:23.535: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091836, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:25.260: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:25.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091836, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:27.253: INFO: 
all replica sets need to contain the pod-template-hash label Dec 16 11:17:27.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:29.219: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:29.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:31.215: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:31.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:33.220: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:33.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:35.214: INFO: all replica sets need to contain the pod-template-hash label Dec 16 11:17:35.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712091833, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 11:17:37.224: INFO: Dec 16 11:17:37.225: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 16 11:17:37.259: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-nrdjb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nrdjb/deployments/test-rollover-deployment,UID:9bd48d50-1ff5-11ea-a994-fa163e34d433,ResourceVersion:15003576,Generation:2,CreationTimestamp:2019-12-16 11:17:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-16 11:17:13 +0000 UTC 2019-12-16 11:17:13 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-16 11:17:37 +0000 UTC 2019-12-16 11:17:13 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 16 11:17:37.266: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-nrdjb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nrdjb/replicasets/test-rollover-deployment-5b8479fdb6,UID:9d15dd03-1ff5-11ea-a994-fa163e34d433,ResourceVersion:15003566,Generation:2,CreationTimestamp:2019-12-16 11:17:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9bd48d50-1ff5-11ea-a994-fa163e34d433 0xc002055977 0xc002055978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 16 11:17:37.266: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 16 11:17:37.266: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-nrdjb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nrdjb/replicasets/test-rollover-controller,UID:94c3725e-1ff5-11ea-a994-fa163e34d433,ResourceVersion:15003575,Generation:2,CreationTimestamp:2019-12-16 11:17:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9bd48d50-1ff5-11ea-a994-fa163e34d433 0xc0020557d7 0xc0020557d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 11:17:37.266: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-nrdjb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nrdjb/replicasets/test-rollover-deployment-58494b7559,UID:9be6d96a-1ff5-11ea-a994-fa163e34d433,ResourceVersion:15003528,Generation:2,CreationTimestamp:2019-12-16 11:17:13 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9bd48d50-1ff5-11ea-a994-fa163e34d433 0xc0020558a7 0xc0020558a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 11:17:37.272: INFO: Pod "test-rollover-deployment-5b8479fdb6-59j7t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-59j7t,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-nrdjb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nrdjb/pods/test-rollover-deployment-5b8479fdb6-59j7t,UID:9dda22e2-1ff5-11ea-a994-fa163e34d433,ResourceVersion:15003551,Generation:0,CreationTimestamp:2019-12-16 11:17:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 9d15dd03-1ff5-11ea-a994-fa163e34d433 0xc002248397 0xc002248398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-t47wz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t47wz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t47wz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002248400} {node.kubernetes.io/unreachable Exists NoExecute 0xc002248420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:17:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:17:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:17:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:17:16 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-16 11:17:16 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-16 11:17:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a475bea3d34454e40b401869f0404bbb49a48bf7d7434dbbc122ea50a29194b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:17:37.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-nrdjb" for this suite. Dec 16 11:17:47.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:17:47.467: INFO: namespace: e2e-tests-deployment-nrdjb, resource: bindings, ignored listing per whitelist Dec 16 11:17:47.540: INFO: namespace e2e-tests-deployment-nrdjb deletion completed in 10.262625007s • [SLOW TEST:46.522 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:17:47.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the 
configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:17:56.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-47dkh" for this suite. Dec 16 11:18:02.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:18:02.186: INFO: namespace: e2e-tests-emptydir-wrapper-47dkh, resource: bindings, ignored listing per whitelist Dec 16 11:18:02.280: INFO: namespace e2e-tests-emptydir-wrapper-47dkh deletion completed in 6.217045269s • [SLOW TEST:14.740 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:18:02.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Dec 16 11:18:02.707: INFO: Waiting up to 5m0s for pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004" in 
namespace "e2e-tests-containers-d442x" to be "success or failure" Dec 16 11:18:02.715: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.397384ms Dec 16 11:18:05.026: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318861643s Dec 16 11:18:07.038: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33105749s Dec 16 11:18:09.162: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454816368s Dec 16 11:18:11.210: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502918623s Dec 16 11:18:13.219: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.511877403s STEP: Saw pod success Dec 16 11:18:13.219: INFO: Pod "client-containers-b972065d-1ff5-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:18:13.222: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-b972065d-1ff5-11ea-9388-0242ac110004 container test-container: STEP: delete the pod Dec 16 11:18:13.317: INFO: Waiting for pod client-containers-b972065d-1ff5-11ea-9388-0242ac110004 to disappear Dec 16 11:18:13.419: INFO: Pod client-containers-b972065d-1ff5-11ea-9388-0242ac110004 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:18:13.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-d442x" for this suite. 
Dec 16 11:18:20.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:18:20.654: INFO: namespace: e2e-tests-containers-d442x, resource: bindings, ignored listing per whitelist Dec 16 11:18:20.725: INFO: namespace e2e-tests-containers-d442x deletion completed in 7.296358081s • [SLOW TEST:18.445 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:18:20.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 16 11:18:21.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 16 11:18:21.203: INFO: stderr: "" Dec 16 11:18:21.203: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", 
Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:18:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zrs58" for this suite. Dec 16 11:18:29.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:18:29.559: INFO: namespace: e2e-tests-kubectl-zrs58, resource: bindings, ignored listing per whitelist Dec 16 11:18:29.579: INFO: namespace e2e-tests-kubectl-zrs58 deletion completed in 8.360878189s • [SLOW TEST:8.853 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:18:29.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 16 11:18:29.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-s4g6m' Dec 16 11:18:29.993: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 16 11:18:29.993: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Dec 16 11:18:29.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-s4g6m' Dec 16 11:18:30.264: INFO: stderr: "" Dec 16 11:18:30.264: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:18:30.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s4g6m" for this suite. 
Dec 16 11:18:38.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:18:38.643: INFO: namespace: e2e-tests-kubectl-s4g6m, resource: bindings, ignored listing per whitelist Dec 16 11:18:38.667: INFO: namespace e2e-tests-kubectl-s4g6m deletion completed in 8.333855569s • [SLOW TEST:9.088 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:18:38.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 16 11:18:49.154: INFO: Waiting up to 5m0s for pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004" in namespace "e2e-tests-pods-r6r6w" to be "success or failure" Dec 16 11:18:49.166: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.042495ms Dec 16 11:18:51.182: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026774415s Dec 16 11:18:53.200: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044905654s Dec 16 11:18:55.218: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063234197s Dec 16 11:18:57.231: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076068425s Dec 16 11:18:59.269: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114620922s STEP: Saw pod success Dec 16 11:18:59.270: INFO: Pod "client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:18:59.303: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004 container env3cont: STEP: delete the pod Dec 16 11:18:59.533: INFO: Waiting for pod client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004 to disappear Dec 16 11:18:59.544: INFO: Pod client-envvars-d51327c1-1ff5-11ea-9388-0242ac110004 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:18:59.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-r6r6w" for this suite. 
Dec 16 11:19:43.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:19:43.756: INFO: namespace: e2e-tests-pods-r6r6w, resource: bindings, ignored listing per whitelist Dec 16 11:19:43.963: INFO: namespace e2e-tests-pods-r6r6w deletion completed in 44.407184972s • [SLOW TEST:65.296 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:19:43.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 16 11:19:44.378: INFO: Waiting up to 5m0s for pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-t9xhf" to be "success or failure" Dec 16 11:19:44.489: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 111.021045ms Dec 16 11:19:46.518: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.140146011s
Dec 16 11:19:48.544: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165657301s
Dec 16 11:19:50.579: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200331134s
Dec 16 11:19:52.779: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.400323959s
Dec 16 11:19:54.804: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.426058762s
STEP: Saw pod success
Dec 16 11:19:54.805: INFO: Pod "pod-f60a51b9-1ff5-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:19:54.812: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f60a51b9-1ff5-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 11:19:55.004: INFO: Waiting for pod pod-f60a51b9-1ff5-11ea-9388-0242ac110004 to disappear
Dec 16 11:19:55.052: INFO: Pod pod-f60a51b9-1ff5-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:19:55.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-t9xhf" for this suite.
Dec 16 11:20:01.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:20:01.322: INFO: namespace: e2e-tests-emptydir-t9xhf, resource: bindings, ignored listing per whitelist
Dec 16 11:20:01.390: INFO: namespace e2e-tests-emptydir-t9xhf deletion completed in 6.332150469s
• [SLOW TEST:17.427 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:20:01.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:20:01.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-nr7ww" to be "success or failure"
Dec 16 11:20:01.673: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.734898ms
Dec 16 11:20:04.236: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577001839s
Dec 16 11:20:06.269: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609681876s
Dec 16 11:20:08.741: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.082005264s
Dec 16 11:20:10.779: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.119135551s
Dec 16 11:20:12.793: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.133202502s
STEP: Saw pod success
Dec 16 11:20:12.793: INFO: Pod "downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:20:12.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 11:20:13.884: INFO: Waiting for pod downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004 to disappear
Dec 16 11:20:14.079: INFO: Pod downwardapi-volume-005882a8-1ff6-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:20:14.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nr7ww" for this suite.
Dec 16 11:20:20.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:20:20.160: INFO: namespace: e2e-tests-downward-api-nr7ww, resource: bindings, ignored listing per whitelist
Dec 16 11:20:20.553: INFO: namespace e2e-tests-downward-api-nr7ww deletion completed in 6.462588189s
• [SLOW TEST:19.162 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:20:20.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0bc2615f-1ff6-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 11:20:20.825: INFO: Waiting up to 5m0s for pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-86pn2" to be "success or failure"
Dec 16 11:20:21.017: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 191.60586ms
Dec 16 11:20:23.033: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207294744s
Dec 16 11:20:25.063: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23788513s
Dec 16 11:20:28.181: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355939757s
Dec 16 11:20:30.205: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.379639084s
Dec 16 11:20:32.246: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.420514188s
STEP: Saw pod success
Dec 16 11:20:32.246: INFO: Pod "pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:20:32.260: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 16 11:20:32.499: INFO: Waiting for pod pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004 to disappear
Dec 16 11:20:32.510: INFO: Pod pod-secrets-0bc41f48-1ff6-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:20:32.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-86pn2" for this suite.
Dec 16 11:20:39.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:20:39.647: INFO: namespace: e2e-tests-secrets-86pn2, resource: bindings, ignored listing per whitelist
Dec 16 11:20:39.647: INFO: namespace e2e-tests-secrets-86pn2 deletion completed in 7.124713647s
• [SLOW TEST:19.094 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:20:39.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-985sq
Dec 16 11:20:50.137: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-985sq
STEP: checking the pod's current state and verifying that restartCount is present
Dec 16 11:20:50.142: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:24:51.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-985sq" for this suite.
Dec 16 11:24:59.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:24:59.882: INFO: namespace: e2e-tests-container-probe-985sq, resource: bindings, ignored listing per whitelist
Dec 16 11:24:59.954: INFO: namespace e2e-tests-container-probe-985sq deletion completed in 8.387183332s
• [SLOW TEST:260.307 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:24:59.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:25:00.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-5jr5x" to be "success or failure"
Dec 16 11:25:00.433: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 64.066973ms
Dec 16 11:25:02.607: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238360438s
Dec 16 11:25:04.652: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282707645s
Dec 16 11:25:07.276: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.907091112s
Dec 16 11:25:09.294: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.924820742s
Dec 16 11:25:11.308: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.938921789s
STEP: Saw pod success
Dec 16 11:25:11.308: INFO: Pod "downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:25:11.312: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 11:25:11.399: INFO: Waiting for pod downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004 to disappear
Dec 16 11:25:11.405: INFO: Pod downwardapi-volume-b25cc840-1ff6-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:25:11.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5jr5x" for this suite.
Dec 16 11:25:17.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:25:17.599: INFO: namespace: e2e-tests-projected-5jr5x, resource: bindings, ignored listing per whitelist
Dec 16 11:25:17.746: INFO: namespace e2e-tests-projected-5jr5x deletion completed in 6.337662297s
• [SLOW TEST:17.791 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:25:17.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 11:25:17.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4gfk9'
Dec 16 11:25:19.941: INFO: stderr: ""
Dec 16 11:25:19.941: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 16 11:25:20.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4gfk9'
Dec 16 11:25:22.712: INFO: stderr: ""
Dec 16 11:25:22.712: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:25:22.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4gfk9" for this suite.
Dec 16 11:25:28.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:25:28.970: INFO: namespace: e2e-tests-kubectl-4gfk9, resource: bindings, ignored listing per whitelist
Dec 16 11:25:29.089: INFO: namespace e2e-tests-kubectl-4gfk9 deletion completed in 6.353245285s
• [SLOW TEST:11.342 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:25:29.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 16 11:25:39.884: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c395862c-1ff6-11ea-9388-0242ac110004"
Dec 16 11:25:39.884: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c395862c-1ff6-11ea-9388-0242ac110004" in namespace "e2e-tests-pods-fpg7j" to be "terminated due to deadline exceeded"
Dec 16 11:25:39.940: INFO: Pod "pod-update-activedeadlineseconds-c395862c-1ff6-11ea-9388-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 55.139948ms
Dec 16 11:25:41.966: INFO: Pod "pod-update-activedeadlineseconds-c395862c-1ff6-11ea-9388-0242ac110004": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.081744816s
Dec 16 11:25:41.967: INFO: Pod "pod-update-activedeadlineseconds-c395862c-1ff6-11ea-9388-0242ac110004" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:25:41.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fpg7j" for this suite.
Dec 16 11:25:48.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:25:48.405: INFO: namespace: e2e-tests-pods-fpg7j, resource: bindings, ignored listing per whitelist
Dec 16 11:25:48.428: INFO: namespace e2e-tests-pods-fpg7j deletion completed in 6.307194804s
• [SLOW TEST:19.339 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:25:48.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-cf5198b0-1ff6-11ea-9388-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-cf519c52-1ff6-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cf5198b0-1ff6-11ea-9388-0242ac110004
STEP: Updating configmap cm-test-opt-upd-cf519c52-1ff6-11ea-9388-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-cf519cc2-1ff6-11ea-9388-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:26:07.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n4mjc" for this suite.
Dec 16 11:26:31.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:26:32.347: INFO: namespace: e2e-tests-projected-n4mjc, resource: bindings, ignored listing per whitelist
Dec 16 11:26:32.358: INFO: namespace e2e-tests-projected-n4mjc deletion completed in 24.986013913s
• [SLOW TEST:43.929 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:26:32.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004
Dec 16 11:26:32.677: INFO: Pod name my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004: Found 0 pods out of 1
Dec 16 11:26:37.692: INFO: Pod name my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004: Found 1 pods out of 1
Dec 16 11:26:37.692: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004" are running
Dec 16 11:26:43.723: INFO: Pod "my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004-tvnjx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 11:26:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 11:26:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 11:26:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 11:26:32 +0000 UTC Reason: Message:}])
Dec 16 11:26:43.723: INFO: Trying to dial the pod
Dec 16 11:26:48.777: INFO: Controller my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004: Got expected result from replica 1 [my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004-tvnjx]: "my-hostname-basic-e9673bd6-1ff6-11ea-9388-0242ac110004-tvnjx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:26:48.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-r2cfd" for this suite.
Dec 16 11:26:56.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:26:56.933: INFO: namespace: e2e-tests-replication-controller-r2cfd, resource: bindings, ignored listing per whitelist
Dec 16 11:26:57.039: INFO: namespace e2e-tests-replication-controller-r2cfd deletion completed in 8.253435403s
• [SLOW TEST:24.680 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:26:57.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 16 11:26:58.281: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:27:21.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-dh4ld" for this suite.
Dec 16 11:27:46.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:27:46.054: INFO: namespace: e2e-tests-init-container-dh4ld, resource: bindings, ignored listing per whitelist
Dec 16 11:27:46.217: INFO: namespace e2e-tests-init-container-dh4ld deletion completed in 24.259808624s
• [SLOW TEST:49.178 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:27:46.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 16 11:27:46.523: INFO: Waiting up to 5m0s for pod "pod-156b4783-1ff7-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-2s568" to be "success or failure"
Dec 16 11:27:46.552: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.496684ms
Dec 16 11:27:48.811: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286854183s
Dec 16 11:27:50.828: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304737409s
Dec 16 11:27:52.858: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334488523s
Dec 16 11:27:54.890: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366045621s
Dec 16 11:27:56.909: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.384902122s
STEP: Saw pod success
Dec 16 11:27:56.909: INFO: Pod "pod-156b4783-1ff7-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:27:56.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-156b4783-1ff7-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 11:27:57.123: INFO: Waiting for pod pod-156b4783-1ff7-11ea-9388-0242ac110004 to disappear
Dec 16 11:27:57.147: INFO: Pod pod-156b4783-1ff7-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:27:57.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2s568" for this suite.
Dec 16 11:28:03.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:28:03.600: INFO: namespace: e2e-tests-emptydir-2s568, resource: bindings, ignored listing per whitelist
Dec 16 11:28:03.628: INFO: namespace e2e-tests-emptydir-2s568 deletion completed in 6.466014272s
• [SLOW TEST:17.411 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:28:03.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:28:03.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 16 11:28:04.100: INFO: stderr: ""
Dec 16 11:28:04.100: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 16 11:28:04.106: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:28:04.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8mmfp" for this suite.
Dec 16 11:28:10.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:28:10.414: INFO: namespace: e2e-tests-kubectl-8mmfp, resource: bindings, ignored listing per whitelist
Dec 16 11:28:10.499: INFO: namespace e2e-tests-kubectl-8mmfp deletion completed in 6.379715006s
S [SKIPPING] [6.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
    Dec 16 11:28:04.106: Not supported for server versions before "1.13.12"
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:28:10.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-23f467ff-1ff7-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 11:28:10.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-l2gvq" to be "success or failure"
Dec 16 11:28:10.996: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.188279ms
Dec 16 11:28:13.014: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04864181s
Dec 16 11:28:15.035: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069655907s
Dec 16 11:28:17.060: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094501077s
Dec 16 11:28:19.073: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107672762s
Dec 16 11:28:21.098: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133051158s
STEP: Saw pod success
Dec 16 11:28:21.099: INFO: Pod "pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:28:21.150: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 16 11:28:21.392: INFO: Waiting for pod pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004 to disappear
Dec 16 11:28:21.419: INFO: Pod pod-configmaps-23ff6d21-1ff7-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:28:21.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-l2gvq" for this suite.
Dec 16 11:28:29.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:28:29.736: INFO: namespace: e2e-tests-configmap-l2gvq, resource: bindings, ignored listing per whitelist
Dec 16 11:28:29.776: INFO: namespace e2e-tests-configmap-l2gvq deletion completed in 8.276430328s
• [SLOW TEST:19.277 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:28:29.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1216 11:28:46.604520 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 11:28:46.604: INFO: For apiserver_request_count:
	For apiserver_request_latencies_summary:
	For etcd_helper_cache_entry_count:
	For etcd_helper_cache_hit_count:
	For etcd_helper_cache_miss_count:
	For etcd_request_cache_add_latencies_summary:
	For etcd_request_cache_get_latencies_summary:
	For etcd_request_latencies_summary:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:28:46.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tshzx" for this suite.
Dec 16 11:29:13.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:29:14.811: INFO: namespace: e2e-tests-gc-tshzx, resource: bindings, ignored listing per whitelist
Dec 16 11:29:15.039: INFO: namespace e2e-tests-gc-tshzx deletion completed in 28.405057952s
• [SLOW TEST:45.262 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:29:15.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 16 11:29:17.880: INFO: Waiting up to 5m0s for pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004" in namespace "e2e-tests-containers-fsfjs" to be "success or failure"
Dec 16 11:29:18.532: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 651.619998ms
Dec 16 11:29:20.991: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.109956252s
Dec 16 11:29:23.319: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.43870863s
Dec 16 11:29:25.340: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.459641558s
Dec 16 11:29:27.359: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.477979404s
Dec 16 11:29:29.989: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.10883385s
Dec 16 11:29:32.045: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.164471601s
Dec 16 11:29:34.123: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.242826215s
Dec 16 11:29:36.152: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.27168072s
STEP: Saw pod success
Dec 16 11:29:36.153: INFO: Pod "client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:29:36.187: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 11:29:36.836: INFO: Waiting for pod client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004 to disappear
Dec 16 11:29:36.861: INFO: Pod client-containers-4b9cb7f7-1ff7-11ea-9388-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:29:36.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fsfjs" for this suite.
Dec 16 11:29:42.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:29:43.190: INFO: namespace: e2e-tests-containers-fsfjs, resource: bindings, ignored listing per whitelist
Dec 16 11:29:43.243: INFO: namespace e2e-tests-containers-fsfjs deletion completed in 6.372807834s
• [SLOW TEST:28.203 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:29:43.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:29:43.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-mf4kf" to be "success or failure"
Dec 16 11:29:43.454: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.481705ms
Dec 16 11:29:45.472: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033810915s
Dec 16 11:29:47.539: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101080402s
Dec 16 11:29:50.047: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609135487s
Dec 16 11:29:52.063: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.624906232s
Dec 16 11:29:54.080: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.642234606s
STEP: Saw pod success
Dec 16 11:29:54.080: INFO: Pod "downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:29:54.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 11:29:55.574: INFO: Waiting for pod downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004 to disappear
Dec 16 11:29:55.758: INFO: Pod downwardapi-volume-5b16180c-1ff7-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:29:55.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mf4kf" for this suite.
Dec 16 11:30:01.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:30:02.186: INFO: namespace: e2e-tests-downward-api-mf4kf, resource: bindings, ignored listing per whitelist
Dec 16 11:30:02.233: INFO: namespace e2e-tests-downward-api-mf4kf deletion completed in 6.449976629s
• [SLOW TEST:18.989 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:30:02.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:30:02.614: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 16 11:30:02.686: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2wm2f/daemonsets","resourceVersion":"15005106"},"items":null}
Dec 16 11:30:02.706: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2wm2f/pods","resourceVersion":"15005106"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:30:02.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2wm2f" for this suite.
Dec 16 11:30:08.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:30:08.878: INFO: namespace: e2e-tests-daemonsets-2wm2f, resource: bindings, ignored listing per whitelist
Dec 16 11:30:08.920: INFO: namespace e2e-tests-daemonsets-2wm2f deletion completed in 6.189460524s
S [SKIPPING] [6.685 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 16 11:30:02.614: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:30:08.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-6a62fbc8-1ff7-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 11:30:09.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-l6l7t" to be "success or failure"
Dec 16 11:30:09.138: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 63.637824ms
Dec 16 11:30:11.169: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095514397s
Dec 16 11:30:13.188: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1139477s
Dec 16 11:30:16.063: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.989215704s
Dec 16 11:30:18.098: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.023580105s
Dec 16 11:30:20.116: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.041767203s
Dec 16 11:30:22.145: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.071459153s
STEP: Saw pod success
Dec 16 11:30:22.146: INFO: Pod "pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:30:22.187: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004 container projected-configmap-volume-test:
STEP: delete the pod
Dec 16 11:30:22.466: INFO: Waiting for pod pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004 to disappear
Dec 16 11:30:22.482: INFO: Pod pod-projected-configmaps-6a63c165-1ff7-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:30:22.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l6l7t" for this suite.
Dec 16 11:30:28.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:30:28.706: INFO: namespace: e2e-tests-projected-l6l7t, resource: bindings, ignored listing per whitelist
Dec 16 11:30:28.865: INFO: namespace e2e-tests-projected-l6l7t deletion completed in 6.371596627s
• [SLOW TEST:19.944 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:30:28.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 16 11:30:41.904: INFO: Successfully updated pod "labelsupdate764dacc3-1ff7-11ea-9388-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:30:43.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dn7k2" for this suite.
Dec 16 11:31:08.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:31:08.381: INFO: namespace: e2e-tests-projected-dn7k2, resource: bindings, ignored listing per whitelist
Dec 16 11:31:08.458: INFO: namespace e2e-tests-projected-dn7k2 deletion completed in 24.452935624s
• [SLOW TEST:39.593 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:31:08.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 16 11:31:09.443: INFO: created pod pod-service-account-defaultsa
Dec 16 11:31:09.443: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 16 11:31:09.467: INFO: created pod pod-service-account-mountsa
Dec 16 11:31:09.467: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 16 11:31:09.585: INFO: created pod pod-service-account-nomountsa
Dec 16 11:31:09.586: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 16 11:31:09.606: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 16 11:31:09.606: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 16 11:31:09.648: INFO: created pod pod-service-account-mountsa-mountspec
Dec 16 11:31:09.649: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 16 11:31:09.825: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 16 11:31:09.825: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 16 11:31:09.854: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 16 11:31:09.854: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 16 11:31:09.962: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 16 11:31:09.962: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 16 11:31:10.041: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 16 11:31:10.041: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:31:10.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-npkxp" for this suite.
Dec 16 11:31:56.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:31:56.372: INFO: namespace: e2e-tests-svcaccounts-npkxp, resource: bindings, ignored listing per whitelist
Dec 16 11:31:56.694: INFO: namespace e2e-tests-svcaccounts-npkxp deletion completed in 46.617821657s
• [SLOW TEST:48.234 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:31:56.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-76z9q.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-76z9q.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 11:32:13.585: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.611: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.630: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.648: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.659: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.666: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.672: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.680: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.686: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.694: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.704: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.709: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.714: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.720: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.725: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.729: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.736: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.741: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.756: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.768: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004)
Dec 16 11:32:13.768: INFO: Lookups using e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-76z9q.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 16 11:32:18.933: INFO: DNS probes using e2e-tests-dns-76z9q/dns-test-aadeaa9f-1ff7-11ea-9388-0242ac110004 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:32:19.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-76z9q" for this suite.
Dec 16 11:32:27.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:32:27.299: INFO: namespace: e2e-tests-dns-76z9q, resource: bindings, ignored listing per whitelist
Dec 16 11:32:27.329: INFO: namespace e2e-tests-dns-76z9q deletion completed in 8.299301443s

• [SLOW TEST:30.634 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:32:27.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 11:32:27.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-pnzgw'
Dec 16 11:32:27.889: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 16 11:32:27.890: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 16 11:32:30.300: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qgqhl]
Dec 16 11:32:30.300: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qgqhl" in namespace "e2e-tests-kubectl-pnzgw" to be "running and ready"
Dec 16 11:32:30.308: INFO: Pod "e2e-test-nginx-rc-qgqhl": Phase="Pending", Reason="", readiness=false. Elapsed: 7.512149ms
Dec 16 11:32:32.330: INFO: Pod "e2e-test-nginx-rc-qgqhl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030053332s
Dec 16 11:32:34.395: INFO: Pod "e2e-test-nginx-rc-qgqhl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094965711s
Dec 16 11:32:36.408: INFO: Pod "e2e-test-nginx-rc-qgqhl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108039639s
Dec 16 11:32:38.425: INFO: Pod "e2e-test-nginx-rc-qgqhl": Phase="Running", Reason="", readiness=true. Elapsed: 8.125069197s
Dec 16 11:32:38.426: INFO: Pod "e2e-test-nginx-rc-qgqhl" satisfied condition "running and ready"
Dec 16 11:32:38.426: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qgqhl]
Dec 16 11:32:38.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pnzgw'
Dec 16 11:32:38.679: INFO: stderr: ""
Dec 16 11:32:38.680: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 16 11:32:38.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-pnzgw'
Dec 16 11:32:38.971: INFO: stderr: ""
Dec 16 11:32:38.971: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:32:38.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pnzgw" for this suite.
Dec 16 11:33:01.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:33:01.093: INFO: namespace: e2e-tests-kubectl-pnzgw, resource: bindings, ignored listing per whitelist
Dec 16 11:33:01.225: INFO: namespace e2e-tests-kubectl-pnzgw deletion completed in 22.241899376s

• [SLOW TEST:33.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:33:01.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d12e12d0-1ff7-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 11:33:01.630: INFO: Waiting up to 5m0s for pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-8rwdb" to be "success or failure"
Dec 16 11:33:01.649: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.296862ms
Dec 16 11:33:03.715: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084082212s
Dec 16 11:33:05.734: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103276963s
Dec 16 11:33:08.152: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521754714s
Dec 16 11:33:10.220: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589361918s
Dec 16 11:33:12.242: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.611391503s
STEP: Saw pod success
Dec 16 11:33:12.242: INFO: Pod "pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:33:12.249: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 16 11:33:13.526: INFO: Waiting for pod pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004 to disappear
Dec 16 11:33:13.560: INFO: Pod pod-configmaps-d13d6341-1ff7-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:33:13.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8rwdb" for this suite.
Dec 16 11:33:19.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:33:19.921: INFO: namespace: e2e-tests-configmap-8rwdb, resource: bindings, ignored listing per whitelist
Dec 16 11:33:19.941: INFO: namespace e2e-tests-configmap-8rwdb deletion completed in 6.369197134s

• [SLOW TEST:18.715 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:33:19.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-x6d2r
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 11:33:20.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 11:34:00.597: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-x6d2r PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 11:34:00.597: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 11:34:02.123: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:34:02.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-x6d2r" for this suite.
Dec 16 11:34:28.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:34:28.410: INFO: namespace: e2e-tests-pod-network-test-x6d2r, resource: bindings, ignored listing per whitelist
Dec 16 11:34:28.503: INFO: namespace e2e-tests-pod-network-test-x6d2r deletion completed in 26.281970384s

• [SLOW TEST:68.562 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:34:28.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 16 11:34:28.795: INFO: Waiting up to 5m0s for pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004" in namespace "e2e-tests-containers-bf5b8" to be "success or failure"
Dec 16 11:34:28.949: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 153.065325ms
Dec 16 11:34:31.887: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.091423277s
Dec 16 11:34:33.923: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.127528971s
Dec 16 11:34:36.012: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.216166381s
Dec 16 11:34:38.663: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.86754505s
Dec 16 11:34:40.684: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.888569626s
STEP: Saw pod success
Dec 16 11:34:40.685: INFO: Pod "client-containers-05338b84-1ff8-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:34:40.696: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-05338b84-1ff8-11ea-9388-0242ac110004 container test-container:
STEP: delete the pod
Dec 16 11:34:42.929: INFO: Waiting for pod client-containers-05338b84-1ff8-11ea-9388-0242ac110004 to disappear
Dec 16 11:34:42.950: INFO: Pod client-containers-05338b84-1ff8-11ea-9388-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:34:42.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-bf5b8" for this suite.
Dec 16 11:34:49.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:34:49.220: INFO: namespace: e2e-tests-containers-bf5b8, resource: bindings, ignored listing per whitelist
Dec 16 11:34:49.331: INFO: namespace e2e-tests-containers-bf5b8 deletion completed in 6.363174472s

• [SLOW TEST:20.827 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:34:49.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:34:49.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-hk5ch" to be "success or failure"
Dec 16 11:34:49.595: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.740383ms
Dec 16 11:34:53.393: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.828937348s
Dec 16 11:34:55.459: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.89573956s
Dec 16 11:34:57.937: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.37299389s
Dec 16 11:35:00.000: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.436629218s
Dec 16 11:35:02.080: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.516261051s
STEP: Saw pod success
Dec 16 11:35:02.080: INFO: Pod "downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:35:02.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 11:35:02.376: INFO: Waiting for pod downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004 to disappear
Dec 16 11:35:02.415: INFO: Pod downwardapi-volume-119318b9-1ff8-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:35:02.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hk5ch" for this suite.
Dec 16 11:35:08.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:35:08.713: INFO: namespace: e2e-tests-downward-api-hk5ch, resource: bindings, ignored listing per whitelist
Dec 16 11:35:08.748: INFO: namespace e2e-tests-downward-api-hk5ch deletion completed in 6.309732748s

• [SLOW TEST:19.416 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:35:08.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:35:09.041: INFO: Creating deployment "nginx-deployment"
Dec 16 11:35:09.062: INFO: Waiting for observed generation 1
Dec 16 11:35:11.548: INFO: Waiting for all required pods to come up
Dec 16 11:35:11.570: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 16 11:35:57.715: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 16 11:35:57.737: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 16 11:35:57.755: INFO: Updating deployment nginx-deployment
Dec 16 11:35:57.755: INFO: Waiting for observed generation 2
Dec 16 11:36:01.081: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 16 11:36:01.112: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 16 11:36:02.600: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 16 11:36:03.044: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 16 11:36:03.044: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 16 11:36:03.053: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 16 11:36:03.061: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 16 11:36:03.061: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 16 11:36:04.014: INFO: Updating deployment nginx-deployment
Dec 16 11:36:04.015: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 16 11:36:06.048: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 16 11:36:06.880: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 16 11:36:09.432: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-vlm26,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vlm26/deployments/nginx-deployment,UID:1d33f324-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006040,Generation:3,CreationTimestamp:2019-12-16 11:35:09 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2019-12-16 11:35:58 +0000 UTC 2019-12-16 11:35:09 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-16 11:36:08 +0000 UTC 2019-12-16 11:36:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 16 11:36:09.870: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-vlm26,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vlm26/replicasets/nginx-deployment-5c98f8fb5,UID:3a3d3056-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006039,Generation:3,CreationTimestamp:2019-12-16 11:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1d33f324-1ff8-11ea-a994-fa163e34d433 0xc0009a2797 0xc0009a2798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 11:36:09.870: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 16 11:36:09.871: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-vlm26,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vlm26/replicasets/nginx-deployment-85ddf47c5d,UID:1d38227e-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006034,Generation:3,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1d33f324-1ff8-11ea-a994-fa163e34d433 0xc0009a2857 0xc0009a2858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 16 11:36:10.801: INFO: Pod "nginx-deployment-5c98f8fb5-6nj7p" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6nj7p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-6nj7p,UID:3a4ab3d0-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006023,Generation:0,CreationTimestamp:2019-12-16 11:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc0009a36b7 0xc0009a36b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a3810} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0009a3830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-16 11:35:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.802: INFO: Pod "nginx-deployment-5c98f8fb5-cjsq7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cjsq7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-cjsq7,UID:4134a4a7-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006053,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc0009a3ac7 0xc0009a3ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009a3cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009a3d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.802: INFO: Pod "nginx-deployment-5c98f8fb5-gnjmv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gnjmv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-gnjmv,UID:3aa9ec05-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006030,Generation:0,CreationTimestamp:2019-12-16 
11:35:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc0009a3ec7 0xc0009a3ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272060} {node.kubernetes.io/unreachable Exists NoExecute 0xc001272080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2019-12-16 11:36:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-16 11:36:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.803: INFO: Pod "nginx-deployment-5c98f8fb5-hlxf5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hlxf5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-hlxf5,UID:3a537c65-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006028,Generation:0,CreationTimestamp:2019-12-16 11:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc001272147 0xc001272148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272720} {node.kubernetes.io/unreachable Exists NoExecute 0xc001272740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-16 11:35:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.803: INFO: Pod "nginx-deployment-5c98f8fb5-hxgbj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hxgbj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-hxgbj,UID:3a53032e-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006027,Generation:0,CreationTimestamp:2019-12-16 11:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc001272817 0xc001272818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272880} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0012728a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-16 11:35:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.804: INFO: Pod "nginx-deployment-5c98f8fb5-mxp6z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mxp6z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-mxp6z,UID:417ce03a-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006054,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc0012729c7 0xc0012729c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001272a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.804: INFO: Pod "nginx-deployment-5c98f8fb5-n4zl7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n4zl7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-n4zl7,UID:417d7fa7-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006059,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc001272b00 0xc001272b01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001272b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.805: INFO: Pod "nginx-deployment-5c98f8fb5-p9ql7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p9ql7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-5c98f8fb5-p9ql7,UID:3aac9022-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006036,Generation:0,CreationTimestamp:2019-12-16 11:35:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a3d3056-1ff8-11ea-a994-fa163e34d433 0xc001272bf0 0xc001272bf1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272c60} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001272c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-16 11:36:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.805: INFO: Pod "nginx-deployment-85ddf47c5d-4vphd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4vphd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-4vphd,UID:412fc9a6-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006049,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc001272f57 0xc001272f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001272fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001272fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.806: INFO: Pod "nginx-deployment-85ddf47c5d-57mcc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-57mcc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-57mcc,UID:1d4ab8e2-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005951,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc001273067 0xc001273068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001273230} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-16 11:35:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4cdf9377d5aa2cb6444e9d734f3765ab1f98a67e45e2d12d8c9dbeea1e2ea54e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.806: INFO: Pod "nginx-deployment-85ddf47c5d-6j5ww" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6j5ww,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-6j5ww,UID:1d55408b-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005956,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc001273367 0xc001273368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012734f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-16 11:35:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:49 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://425768d9f446f9b3543a8c7e083be013c3a9f001e5685a8308667a22b902902a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.807: INFO: Pod "nginx-deployment-85ddf47c5d-767mm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-767mm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-767mm,UID:1d544548-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005945,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc0012735d7 0xc0012735d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012736c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-16 11:35:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b24b234cafa4ab20570b6310e247a08bdf2d9f117dfac84a937422a2f41fe3d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.807: INFO: Pod "nginx-deployment-85ddf47c5d-9wvk5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9wvk5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-9wvk5,UID:417d5b95-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006056,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc0012737c7 0xc0012737c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0012738a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012738c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.808: INFO: Pod "nginx-deployment-85ddf47c5d-f4hw6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f4hw6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-f4hw6,UID:1d54dcfa-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005960,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc0012739d0 0xc0012739d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001273a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-16 11:35:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9368b159550a04152938d8db6e974ee3fc9ea4366352f3f91bd20afd4d231636}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.809: INFO: Pod "nginx-deployment-85ddf47c5d-fkcgt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fkcgt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-fkcgt,UID:1d85f76d-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005965,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc001273b17 0xc001273b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001273b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-16 11:35:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e44a3b3fada3b8bc2911fdbf18e8787993ae7c7e9846729da58cd9b2bdd5ef69}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.809: INFO: Pod "nginx-deployment-85ddf47c5d-kjq4s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kjq4s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-kjq4s,UID:1d85aa08-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005942,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc001273cc7 0xc001273cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001273db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-16 11:35:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:50 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4ec1423da01dfa40a18f3452f0668870a92c85a732860c9c46e711ee7b681185}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.810: INFO: Pod "nginx-deployment-85ddf47c5d-kxnjv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kxnjv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-kxnjv,UID:41381cb8-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006061,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc001273e97 0xc001273e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001273fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001273fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.810: INFO: Pod "nginx-deployment-85ddf47c5d-lstpq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lstpq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-lstpq,UID:417d7b0e-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006058,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc00119c147 0xc00119c148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00119c1b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00119c1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.811: INFO: Pod "nginx-deployment-85ddf47c5d-m2rwx" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m2rwx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-m2rwx,UID:1d49f871-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005930,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc00119c230 0xc00119c231}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc00119c290} {node.kubernetes.io/unreachable Exists NoExecute 0xc00119c2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-16 11:35:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://10592f4ec39e34040a27f9463fff8900e4f39b7a9788d6aae85c23ce173d496f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.812: INFO: Pod "nginx-deployment-85ddf47c5d-n4vll" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n4vll,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-n4vll,UID:1d4781aa-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15005948,Generation:0,CreationTimestamp:2019-12-16 11:35:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc00119c377 0xc00119c378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00119c3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00119c400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:35:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-16 11:35:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 11:35:51 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6e53db0b520a5404f57e0cb327981153d00d58ef521b440493e1fe51907f831e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.812: INFO: Pod "nginx-deployment-85ddf47c5d-nx9v5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nx9v5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-nx9v5,UID:4138071d-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006052,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc00119c4c7 0xc00119c4c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00119c530} {node.kubernetes.io/unreachable Exists NoExecute 0xc00119c550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 11:36:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.812: INFO: Pod "nginx-deployment-85ddf47c5d-p8jzj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p8jzj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-p8jzj,UID:417d9a85-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006055,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc00119c5c7 0xc00119c5c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00119c630} {node.kubernetes.io/unreachable Exists NoExecute 0xc00119c650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 16 11:36:10.813: INFO: Pod "nginx-deployment-85ddf47c5d-rnd98" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rnd98,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vlm26,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vlm26/pods/nginx-deployment-85ddf47c5d-rnd98,UID:417d53f4-1ff8-11ea-a994-fa163e34d433,ResourceVersion:15006057,Generation:0,CreationTimestamp:2019-12-16 11:36:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d38227e-1ff8-11ea-a994-fa163e34d433 0xc00119c6b0 0xc00119c6b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6vf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6vf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6vf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00119c710} {node.kubernetes.io/unreachable Exists NoExecute 0xc00119c730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:36:10.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vlm26" for this suite.
Dec 16 11:37:14.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:37:15.388: INFO: namespace: e2e-tests-deployment-vlm26, resource: bindings, ignored listing per whitelist
Dec 16 11:37:15.428: INFO: namespace e2e-tests-deployment-vlm26 deletion completed in 1m4.322154318s

• [SLOW TEST:126.679 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:37:15.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from
pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-68c27f54-1ff8-11ea-9388-0242ac110004 STEP: Creating a pod to test consume secrets Dec 16 11:37:15.968: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-xz4wq" to be "success or failure" Dec 16 11:37:16.205: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 236.913992ms Dec 16 11:37:18.901: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933432011s Dec 16 11:37:20.930: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.962638592s Dec 16 11:37:23.034: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.065754859s Dec 16 11:37:25.225: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.257162782s Dec 16 11:37:27.250: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.28197423s Dec 16 11:37:30.138: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.169906492s Dec 16 11:37:32.162: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.194323343s Dec 16 11:37:34.189: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.220908682s Dec 16 11:37:36.229: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.260950772s Dec 16 11:37:39.446: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.478060935s Dec 16 11:37:42.609: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.641543118s Dec 16 11:37:44.639: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.671241319s Dec 16 11:37:46.678: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.710006423s Dec 16 11:37:48.695: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.726943299s STEP: Saw pod success Dec 16 11:37:48.695: INFO: Pod "pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:37:48.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004 container projected-secret-volume-test: STEP: delete the pod Dec 16 11:37:48.846: INFO: Waiting for pod pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004 to disappear Dec 16 11:37:48.908: INFO: Pod pod-projected-secrets-68d5eced-1ff8-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:37:48.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xz4wq" for this suite. 
Dec 16 11:37:54.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:37:55.154: INFO: namespace: e2e-tests-projected-xz4wq, resource: bindings, ignored listing per whitelist
Dec 16 11:37:55.163: INFO: namespace e2e-tests-projected-xz4wq deletion completed in 6.242171388s

• [SLOW TEST:39.735 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:37:55.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-cnvvx/configmap-test-805979b4-1ff8-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 11:37:55.429: INFO: Waiting up to 5m0s for pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-cnvvx" to be "success or failure"
Dec 16 11:37:55.533: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false.
Elapsed: 103.957035ms Dec 16 11:37:57.743: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31376296s Dec 16 11:37:59.772: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342393885s Dec 16 11:38:01.808: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378846603s Dec 16 11:38:04.390: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96046527s Dec 16 11:38:06.410: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.980315218s Dec 16 11:38:08.430: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.000888966s STEP: Saw pod success Dec 16 11:38:08.431: INFO: Pod "pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:38:08.444: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004 container env-test: STEP: delete the pod Dec 16 11:38:09.114: INFO: Waiting for pod pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004 to disappear Dec 16 11:38:09.139: INFO: Pod pod-configmaps-805ae1cd-1ff8-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:38:09.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cnvvx" for this suite. 
Dec 16 11:38:15.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:38:15.437: INFO: namespace: e2e-tests-configmap-cnvvx, resource: bindings, ignored listing per whitelist
Dec 16 11:38:15.486: INFO: namespace e2e-tests-configmap-cnvvx deletion completed in 6.339626863s

• [SLOW TEST:20.323 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:38:15.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-fkktk
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-fkktk
STEP: Deleting pre-stop pod
Dec 16 11:38:40.936: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected.
Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:38:40.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-fkktk" for this suite.
Dec 16 11:39:23.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:39:23.249: INFO: namespace: e2e-tests-prestop-fkktk, resource: bindings, ignored listing per whitelist
Dec 16 11:39:23.255: INFO: namespace e2e-tests-prestop-fkktk deletion completed in 42.268922636s

• [SLOW TEST:67.769 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:39:23.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance]
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-b4d666f5-1ff8-11ea-9388-0242ac110004 STEP: Creating a pod to test consume secrets Dec 16 11:39:23.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-b2hhs" to be "success or failure" Dec 16 11:39:23.495: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.850508ms Dec 16 11:39:25.515: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028314527s Dec 16 11:39:27.556: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069963164s Dec 16 11:39:29.671: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184989719s Dec 16 11:39:31.685: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198588027s Dec 16 11:39:33.699: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212723318s Dec 16 11:39:35.871: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.384881124s
STEP: Saw pod success
Dec 16 11:39:35.872: INFO: Pod "pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:39:35.895: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 16 11:39:36.500: INFO: Waiting for pod pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004 to disappear
Dec 16 11:39:36.535: INFO: Pod pod-projected-secrets-b4d72fa8-1ff8-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:39:36.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b2hhs" for this suite.
Dec 16 11:39:42.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:39:42.892: INFO: namespace: e2e-tests-projected-b2hhs, resource: bindings, ignored listing per whitelist
Dec 16 11:39:42.910: INFO: namespace e2e-tests-projected-b2hhs deletion completed in 6.352914703s

• [SLOW TEST:19.655 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16
11:39:42.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wspdz in namespace e2e-tests-proxy-p64sh I1216 11:39:43.327009 8 runners.go:184] Created replication controller with name: proxy-service-wspdz, namespace: e2e-tests-proxy-p64sh, replica count: 1 I1216 11:39:44.379230 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:45.380412 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:46.381563 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:47.382590 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:48.383433 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:49.384356 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:50.385109 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:51.385962 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:52.386789 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:53.387436 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:54.388672 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1216 11:39:55.389560 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:39:56.390278 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:39:57.391015 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:39:58.391685 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:39:59.392399 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:40:00.393209 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:40:01.393890 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1216 11:40:02.394980 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 1 runningButNotReady
I1216 11:40:03.395783 8 runners.go:184] proxy-service-wspdz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 16 11:40:03.419: INFO: setup took 20.245384412s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 16 11:40:03.483: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-p64sh/pods/proxy-service-wspdz-gntzk:162/proxy/: bar (200; 63.012204ms)
Dec 16 11:40:03.483: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-p64sh/pods/http:proxy-service-wspdz-gntzk:160/proxy/: foo (200; 63.232226ms)
Dec 16 11:40:03.483: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-p64sh/pods/proxy-service-wspdz-gntzk:160/proxy/: foo (200; 63.644648ms)
Dec 16 11:40:03.486: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-p64sh/pods/proxy-service-wspdz-gntzk/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:40:19.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-x279g" to be "success or failure"
Dec 16 11:40:19.860: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 84.471535ms
Dec 16 11:40:22.099: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.323151639s Dec 16 11:40:24.128: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352068619s Dec 16 11:40:28.606: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.830324151s Dec 16 11:40:30.642: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.866895095s Dec 16 11:40:32.670: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.894787138s Dec 16 11:40:34.889: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.113280431s STEP: Saw pod success Dec 16 11:40:34.889: INFO: Pod "downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:40:34.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004 container client-container: STEP: delete the pod Dec 16 11:40:35.013: INFO: Waiting for pod downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004 to disappear Dec 16 11:40:35.025: INFO: Pod downwardapi-volume-d660c73d-1ff8-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:40:35.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x279g" for this suite. 
Dec 16 11:40:41.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:40:41.207: INFO: namespace: e2e-tests-projected-x279g, resource: bindings, ignored listing per whitelist
Dec 16 11:40:41.218: INFO: namespace e2e-tests-projected-x279g deletion completed in 6.185138193s

• [SLOW TEST:21.649 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:40:41.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 16 11:43:49.553: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:43:49.648: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:43:51.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:43:51.664: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:43:53.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:43:53.664: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:43:55.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:43:55.668: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:43:57.651: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:43:57.681: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:43:59.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:43:59.668: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:44:01.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:44:01.669: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:44:03.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:44:03.668: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:44:05.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:44:05.667: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:44:07.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:44:07.665: INFO: Pod pod-with-poststart-exec-hook still exists Dec 16 11:44:09.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 16 11:44:09.662: INFO: Pod 
pod-with-poststart-exec-hook still exists
Dec 16 11:44:11.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:11.668: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:13.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:13.666: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:15.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:15.665: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:17.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:17.673: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:19.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:19.664: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:21.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:21.670: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:23.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:24.098: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:25.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:25.669: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:27.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:27.669: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:29.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:29.668: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:31.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:31.667: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:33.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:33.668: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:35.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:35.663: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:37.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:37.669: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:39.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:39.665: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:41.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:41.707: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:43.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:43.687: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:45.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:45.670: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:47.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:47.711: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:49.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:49.668: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:51.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:51.668: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:53.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:53.665: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:55.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:55.675: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:57.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:57.679: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:44:59.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:44:59.669: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:01.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:01.664: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:03.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:03.672: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:05.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:05.668: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:07.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:07.661: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:09.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:09.673: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:11.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:11.690: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:13.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:13.666: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:15.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:15.666: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:17.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:17.736: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:19.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:19.854: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:21.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:21.921: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:23.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:23.695: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:25.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:25.661: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:27.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:27.686: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:29.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:29.660: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:31.650: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:31.676: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 16 11:45:33.649: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 16 11:45:33.670: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:45:33.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-drrnr" for this suite.
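The delete-and-poll pattern recorded above (re-check roughly every 2 seconds until the pod is gone or a deadline passes) can be sketched as follows. This is an illustrative Python sketch, not the actual Go e2e framework code; the function name and the injectable clock/sleep parameters are assumptions made for testability.

```python
import time

def wait_for_pod_gone(pod_exists, timeout=300.0, interval=2.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll pod_exists() every `interval` seconds until it returns False
    or `timeout` seconds elapse. Returns True once the pod is gone,
    False on timeout. `clock` and `sleep` are injectable so the loop
    can be tested without real waiting."""
    deadline = clock() + timeout
    while True:
        if not pod_exists():
            return True          # pod no longer exists
        if clock() >= deadline:
            return False         # gave up, mirroring the framework timeout
        sleep(interval)
```

In the log, the same check fires at ~2 s intervals for about 80 seconds before the "no longer exists" line terminates the loop.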
Dec 16 11:45:57.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:45:57.856: INFO: namespace: e2e-tests-container-lifecycle-hook-drrnr, resource: bindings, ignored listing per whitelist
Dec 16 11:45:57.905: INFO: namespace e2e-tests-container-lifecycle-hook-drrnr deletion completed in 24.223818427s
• [SLOW TEST:316.687 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:45:57.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:45:58.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-6p9p7" to be "success or failure"
Dec 16 11:45:58.152: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.081793ms
Dec 16 11:46:00.488: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348968094s
Dec 16 11:46:02.520: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380547627s
Dec 16 11:46:04.709: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570307243s
Dec 16 11:46:06.798: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658453757s
Dec 16 11:46:08.826: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.687076645s
Dec 16 11:46:10.843: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.703783661s
STEP: Saw pod success
Dec 16 11:46:10.843: INFO: Pod "downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:46:10.855: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 11:46:10.971: INFO: Waiting for pod downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004 to disappear
Dec 16 11:46:10.984: INFO: Pod downwardapi-volume-a012ace2-1ff9-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:46:10.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6p9p7" for this suite.
Dec 16 11:46:17.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:46:17.144: INFO: namespace: e2e-tests-downward-api-6p9p7, resource: bindings, ignored listing per whitelist
Dec 16 11:46:17.234: INFO: namespace e2e-tests-downward-api-6p9p7 deletion completed in 6.240762455s
• [SLOW TEST:19.329 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:46:17.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 16 11:46:17.445: INFO: Waiting up to 5m0s for pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004" in namespace "e2e-tests-var-expansion-t6n57" to be "success or failure"
Dec 16 11:46:17.471: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 25.734148ms
Dec 16 11:46:19.546: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100272931s
Dec 16 11:46:21.571: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125603673s
Dec 16 11:46:23.889: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443785166s
Dec 16 11:46:25.928: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482454957s
Dec 16 11:46:27.951: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.504933407s
Dec 16 11:46:29.972: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.526242556s
STEP: Saw pod success
Dec 16 11:46:29.972: INFO: Pod "var-expansion-ab941313-1ff9-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:46:29.979: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-ab941313-1ff9-11ea-9388-0242ac110004 container dapi-container:
STEP: delete the pod
Dec 16 11:46:30.290: INFO: Waiting for pod var-expansion-ab941313-1ff9-11ea-9388-0242ac110004 to disappear
Dec 16 11:46:30.328: INFO: Pod var-expansion-ab941313-1ff9-11ea-9388-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:46:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-t6n57" for this suite.
Dec 16 11:46:36.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:46:36.696: INFO: namespace: e2e-tests-var-expansion-t6n57, resource: bindings, ignored listing per whitelist
Dec 16 11:46:36.709: INFO: namespace e2e-tests-var-expansion-t6n57 deletion completed in 6.371416351s
• [SLOW TEST:19.474 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:46:36.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:46:47.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-drgb2" for this suite.
Dec 16 11:47:33.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:47:33.489: INFO: namespace: e2e-tests-kubelet-test-drgb2, resource: bindings, ignored listing per whitelist
Dec 16 11:47:33.511: INFO: namespace e2e-tests-kubelet-test-drgb2 deletion completed in 46.28318758s
• [SLOW TEST:56.802 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:47:33.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jxm7g
Dec 16 11:47:45.948: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jxm7g
STEP: checking the pod's current state and verifying that restartCount is present
Dec 16 11:47:45.963: INFO: Initial restart count of pod liveness-http is 0
Dec 16 11:48:06.665: INFO: Restart count of pod e2e-tests-container-probe-jxm7g/liveness-http is now 1 (20.701738645s elapsed)
Dec 16 11:48:25.129: INFO: Restart count of pod e2e-tests-container-probe-jxm7g/liveness-http is now 2 (39.165543822s elapsed)
Dec 16 11:48:45.491: INFO: Restart count of pod e2e-tests-container-probe-jxm7g/liveness-http is now 3 (59.528167432s elapsed)
Dec 16 11:49:04.429: INFO: Restart count of pod e2e-tests-container-probe-jxm7g/liveness-http is now 4 (1m18.465808561s elapsed)
Dec 16 11:50:07.349: INFO: Restart count of pod e2e-tests-container-probe-jxm7g/liveness-http is now 5 (2m21.385944843s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:50:07.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jxm7g" for this suite.
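The property this probe test asserts, that the restart counts observed over time may only grow, reduces to a pairwise comparison over the observed sequence. A minimal illustrative check (not the framework's actual Go code):

```python
def restart_counts_monotonic(counts):
    """True if every observed restartCount is >= the one before it,
    e.g. the sequence 0, 1, 2, 3, 4, 5 recorded in the log above."""
    return all(later >= earlier for earlier, later in zip(counts, counts[1:]))
```

A decreasing count would indicate the kubelet reset or replaced the container's status history, which the conformance test treats as a failure.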
Dec 16 11:50:13.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:50:13.650: INFO: namespace: e2e-tests-container-probe-jxm7g, resource: bindings, ignored listing per whitelist
Dec 16 11:50:13.731: INFO: namespace e2e-tests-container-probe-jxm7g deletion completed in 6.255338054s
• [SLOW TEST:160.219 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:50:13.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-38a1d594-1ffa-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 11:50:14.213: INFO: Waiting up to 5m0s for pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-tps5k" to be "success or failure"
Dec 16 11:50:14.230: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.750094ms
Dec 16 11:50:16.258: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044618069s
Dec 16 11:50:18.287: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073297902s
Dec 16 11:50:20.305: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091221522s
Dec 16 11:50:22.319: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105965934s
Dec 16 11:50:24.343: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129469741s
STEP: Saw pod success
Dec 16 11:50:24.343: INFO: Pod "pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:50:24.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 16 11:50:25.774: INFO: Waiting for pod pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004 to disappear
Dec 16 11:50:25.793: INFO: Pod pod-secrets-38b527d7-1ffa-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:50:25.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tps5k" for this suite.
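The "success or failure" condition polled throughout these tests terminates as soon as the pod reaches a terminal phase; any other phase keeps the loop going. A hedged sketch of that predicate (names here are illustrative, not the e2e framework's identifiers):

```python
# Kubernetes pod phases "Succeeded" and "Failed" are terminal; "Pending"
# and "Running" are not, so the wait continues through them.
TERMINAL_PHASES = {"Succeeded", "Failed"}

def pod_finished(phase):
    """The 'success or failure' condition: any terminal phase ends the wait."""
    return phase in TERMINAL_PHASES

def first_terminal_phase(phases):
    """Return the first terminal phase in an observed sequence, or None
    if the pod never finished within the observations."""
    return next((p for p in phases if pod_finished(p)), None)
```

In the log above the sequence is five "Pending" observations followed by "Succeeded", at which point the framework logs "Saw pod success".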
Dec 16 11:50:31.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:50:32.073: INFO: namespace: e2e-tests-secrets-tps5k, resource: bindings, ignored listing per whitelist
Dec 16 11:50:32.121: INFO: namespace e2e-tests-secrets-tps5k deletion completed in 6.312408961s
• [SLOW TEST:18.389 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:50:32.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-43865653-1ffa-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 11:50:32.357: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-kkgrp" to be "success or failure"
Dec 16 11:50:32.371: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.69399ms
Dec 16 11:50:34.396: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038891013s
Dec 16 11:50:36.415: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05765028s
Dec 16 11:50:39.174: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817136383s
Dec 16 11:50:41.187: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.829782291s
Dec 16 11:50:43.197: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.840273082s
STEP: Saw pod success
Dec 16 11:50:43.198: INFO: Pod "pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:50:43.203: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 16 11:50:44.165: INFO: Waiting for pod pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004 to disappear
Dec 16 11:50:44.509: INFO: Pod pod-projected-secrets-43876ad9-1ffa-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:50:44.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kkgrp" for this suite.
Dec 16 11:50:50.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:50:50.785: INFO: namespace: e2e-tests-projected-kkgrp, resource: bindings, ignored listing per whitelist
Dec 16 11:50:50.926: INFO: namespace e2e-tests-projected-kkgrp deletion completed in 6.296978257s
• [SLOW TEST:18.805 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:50:50.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-bqg4f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bqg4f to expose endpoints map[]
Dec 16 11:50:51.302: INFO: Get endpoints failed (43.009748ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 16 11:50:52.316: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bqg4f exposes endpoints map[] (1.05701499s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-bqg4f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bqg4f to expose endpoints map[pod1:[100]]
Dec 16 11:50:56.589: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.239693879s elapsed, will retry)
Dec 16 11:51:02.776: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bqg4f exposes endpoints map[pod1:[100]] (10.426699775s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-bqg4f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bqg4f to expose endpoints map[pod1:[100] pod2:[101]]
Dec 16 11:51:07.407: INFO: Unexpected endpoints: found map[4f70cf93-1ffa-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.590339593s elapsed, will retry)
Dec 16 11:51:12.660: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bqg4f exposes endpoints map[pod1:[100] pod2:[101]] (9.843427613s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-bqg4f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bqg4f to expose endpoints map[pod2:[101]]
Dec 16 11:51:13.847: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bqg4f exposes endpoints map[pod2:[101]] (1.177925387s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-bqg4f
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bqg4f to expose endpoints map[]
Dec 16 11:51:15.152: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bqg4f exposes endpoints map[] (1.046453643s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:51:16.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bqg4f" for this suite.
Dec 16 11:51:41.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:51:41.376: INFO: namespace: e2e-tests-services-bqg4f, resource: bindings, ignored listing per whitelist
Dec 16 11:51:41.393: INFO: namespace e2e-tests-services-bqg4f deletion completed in 24.622724064s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:50.466 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:51:41.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 11:51:41.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:51:43.453: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 16 11:51:43.454: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 16 11:51:43.558: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 16 11:51:43.562: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 16 11:51:43.612: INFO: scanned /root for discovery docs:
Dec 16 11:51:43.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:52:11.303: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 16 11:52:11.304: INFO: stdout: "Created e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358\nScaling up e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 16 11:52:11.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:52:11.524: INFO: stderr: ""
Dec 16 11:52:11.524: INFO: stdout: "e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358-qxq5g e2e-test-nginx-rc-vxqgm "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 16 11:52:16.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:52:16.680: INFO: stderr: ""
Dec 16 11:52:16.681: INFO: stdout: "e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358-qxq5g "
Dec 16 11:52:16.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358-qxq5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:52:16.843: INFO: stderr: ""
Dec 16 11:52:16.844: INFO: stdout: "true"
Dec 16 11:52:16.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358-qxq5g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:52:16.999: INFO: stderr: ""
Dec 16 11:52:16.999: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 16 11:52:16.999: INFO: e2e-test-nginx-rc-94131ae0f4d65c74db9a06480258b358-qxq5g is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 16 11:52:16.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fmcfb'
Dec 16 11:52:17.159: INFO: stderr: ""
Dec 16 11:52:17.159: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:52:17.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fmcfb" for this suite.
Dec 16 11:52:41.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:52:41.425: INFO: namespace: e2e-tests-kubectl-fmcfb, resource: bindings, ignored listing per whitelist
Dec 16 11:52:41.478: INFO: namespace e2e-tests-kubectl-fmcfb deletion completed in 24.29432996s
• [SLOW TEST:60.085 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:52:41.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:52:48.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-x657n" for this suite.
Dec 16 11:52:54.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:52:54.349: INFO: namespace: e2e-tests-namespaces-x657n, resource: bindings, ignored listing per whitelist
Dec 16 11:52:54.620: INFO: namespace e2e-tests-namespaces-x657n deletion completed in 6.395108762s
STEP: Destroying namespace "e2e-tests-nsdeletetest-tbzhk" for this suite.
Dec 16 11:52:54.624: INFO: Namespace e2e-tests-nsdeletetest-tbzhk was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-mlgqf" for this suite.
Dec 16 11:53:00.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:53:00.875: INFO: namespace: e2e-tests-nsdeletetest-mlgqf, resource: bindings, ignored listing per whitelist
Dec 16 11:53:00.922: INFO: namespace e2e-tests-nsdeletetest-mlgqf deletion completed in 6.2974621s
• [SLOW TEST:19.443 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:53:00.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:53:01.283: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9c38e772-1ffa-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0012cdda2), BlockOwnerDeletion:(*bool)(0xc0012cdda3)}}
Dec 16 11:53:01.435: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9c3578d3-1ffa-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00194e1ba), BlockOwnerDeletion:(*bool)(0xc00194e1bb)}}
Dec 16 11:53:01.507: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9c37360f-1ffa-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0012cdf82), BlockOwnerDeletion:(*bool)(0xc0012cdf83)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:53:11.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vw8jd" for this suite.
Dec 16 11:53:17.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:53:17.990: INFO: namespace: e2e-tests-gc-vw8jd, resource: bindings, ignored listing per whitelist
Dec 16 11:53:18.070: INFO: namespace e2e-tests-gc-vw8jd deletion completed in 6.269324511s
• [SLOW TEST:17.147 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:53:18.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 16 11:53:40.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 11:53:40.520: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 11:53:42.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 11:53:42.543: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 11:53:44.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 11:53:44.558: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 11:53:46.523: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 11:53:46.556: INFO: Pod pod-with-poststart-http-hook still exists
Dec 16 11:53:48.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 16 11:53:48.552: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:53:48.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9bng7" for this suite.
Dec 16 11:54:12.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:54:12.650: INFO: namespace: e2e-tests-container-lifecycle-hook-9bng7, resource: bindings, ignored listing per whitelist
Dec 16 11:54:12.764: INFO: namespace e2e-tests-container-lifecycle-hook-9bng7 deletion completed in 24.196955426s
• [SLOW TEST:54.694 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:54:12.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c707c2f7-1ffa-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 11:54:12.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-dnn5j" to be "success or failure"
Dec 16 11:54:13.010: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.949299ms
Dec 16 11:54:15.024: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030526641s
Dec 16 11:54:17.050: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055622113s
Dec 16 11:54:19.475: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48066968s
Dec 16 11:54:21.564: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.57057687s
Dec 16 11:54:23.760: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.766161938s
Dec 16 11:54:25.773: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.779586136s
STEP: Saw pod success
Dec 16 11:54:25.774: INFO: Pod "pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:54:25.780: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 16 11:54:26.111: INFO: Waiting for pod pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004 to disappear
Dec 16 11:54:26.436: INFO: Pod pod-configmaps-c709b679-1ffa-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:54:26.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dnn5j" for this suite.
Dec 16 11:54:32.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:54:32.809: INFO: namespace: e2e-tests-configmap-dnn5j, resource: bindings, ignored listing per whitelist
Dec 16 11:54:32.913: INFO: namespace e2e-tests-configmap-dnn5j deletion completed in 6.418960709s
• [SLOW TEST:20.148 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:54:32.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1216 11:54:37.621145 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 11:54:37.621: INFO: For apiserver_request_count:
	For apiserver_request_latencies_summary:
	For etcd_helper_cache_entry_count:
	For etcd_helper_cache_hit_count:
	For etcd_helper_cache_miss_count:
	For etcd_request_cache_add_latencies_summary:
	For etcd_request_cache_get_latencies_summary:
	For etcd_request_latencies_summary:
	For garbage_collector_attempt_to_delete_queue_latency:
	For garbage_collector_attempt_to_delete_work_duration:
	For garbage_collector_attempt_to_orphan_queue_latency:
	For garbage_collector_attempt_to_orphan_work_duration:
	For garbage_collector_dirty_processing_latency_microseconds:
	For garbage_collector_event_processing_latency_microseconds:
	For garbage_collector_graph_changes_queue_latency:
	For garbage_collector_graph_changes_work_duration:
	For garbage_collector_orphan_processing_latency_microseconds:
	For namespace_queue_latency:
	For namespace_queue_latency_sum:
	For namespace_queue_latency_count:
	For namespace_retries:
	For namespace_work_duration:
	For namespace_work_duration_sum:
	For namespace_work_duration_count:
	For function_duration_seconds:
	For errors_total:
	For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:54:37.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qzpcx" for this suite.
Dec 16 11:54:43.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:54:43.906: INFO: namespace: e2e-tests-gc-qzpcx, resource: bindings, ignored listing per whitelist
Dec 16 11:54:44.059: INFO: namespace e2e-tests-gc-qzpcx deletion completed in 6.428196818s
• [SLOW TEST:11.146 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:54:44.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d9c5f967-1ffa-11ea-9388-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-d9c5face-1ffa-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d9c5f967-1ffa-11ea-9388-0242ac110004
STEP: Updating configmap cm-test-opt-upd-d9c5face-1ffa-11ea-9388-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-d9c5faef-1ffa-11ea-9388-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:55:05.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2jjdp" for this suite.
Dec 16 11:55:31.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:55:31.372: INFO: namespace: e2e-tests-configmap-2jjdp, resource: bindings, ignored listing per whitelist
Dec 16 11:55:31.569: INFO: namespace e2e-tests-configmap-2jjdp deletion completed in 26.417204378s
• [SLOW TEST:47.509 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:55:31.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 11:55:31.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:55:44.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dl8pr" for this suite.
Dec 16 11:56:38.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:56:38.722: INFO: namespace: e2e-tests-pods-dl8pr, resource: bindings, ignored listing per whitelist
Dec 16 11:56:38.786: INFO: namespace e2e-tests-pods-dl8pr deletion completed in 54.399648434s
• [SLOW TEST:67.216 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:56:38.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 11:56:39.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-c95wb" to be "success or failure"
Dec 16 11:56:39.091: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.269951ms
Dec 16 11:56:41.147: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062983927s
Dec 16 11:56:43.174: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089611324s
Dec 16 11:56:45.197: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11239464s
Dec 16 11:56:47.348: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263915027s
Dec 16 11:56:49.362: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.27822873s
Dec 16 11:56:51.388: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.303481595s
STEP: Saw pod success
Dec 16 11:56:51.388: INFO: Pod "downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 11:56:51.405: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004 container client-container:
STEP: delete the pod
Dec 16 11:56:51.640: INFO: Waiting for pod downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004 to disappear
Dec 16 11:56:51.693: INFO: Pod downwardapi-volume-1e0bb9f3-1ffb-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:56:51.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c95wb" for this suite.
Dec 16 11:56:58.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:56:58.767: INFO: namespace: e2e-tests-projected-c95wb, resource: bindings, ignored listing per whitelist
Dec 16 11:56:58.813: INFO: namespace e2e-tests-projected-c95wb deletion completed in 6.96491996s
• [SLOW TEST:20.027 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:56:58.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:56:59.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8dk9d" for this suite.
Dec 16 11:57:05.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:57:05.394: INFO: namespace: e2e-tests-kubelet-test-8dk9d, resource: bindings, ignored listing per whitelist
Dec 16 11:57:05.458: INFO: namespace e2e-tests-kubelet-test-8dk9d deletion completed in 6.237452686s
• [SLOW TEST:6.644 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:57:05.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 16 11:57:05.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:06.208: INFO: stderr: ""
Dec 16 11:57:06.208: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 11:57:06.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:06.597: INFO: stderr: ""
Dec 16 11:57:06.598: INFO: stdout: "update-demo-nautilus-bp6qn update-demo-nautilus-phrmg "
Dec 16 11:57:06.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp6qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:06.835: INFO: stderr: ""
Dec 16 11:57:06.836: INFO: stdout: ""
Dec 16 11:57:06.836: INFO: update-demo-nautilus-bp6qn is created but not running
Dec 16 11:57:11.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:12.064: INFO: stderr: ""
Dec 16 11:57:12.065: INFO: stdout: "update-demo-nautilus-bp6qn update-demo-nautilus-phrmg "
Dec 16 11:57:12.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp6qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:12.356: INFO: stderr: ""
Dec 16 11:57:12.356: INFO: stdout: ""
Dec 16 11:57:12.356: INFO: update-demo-nautilus-bp6qn is created but not running
Dec 16 11:57:17.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:17.548: INFO: stderr: ""
Dec 16 11:57:17.548: INFO: stdout: "update-demo-nautilus-bp6qn update-demo-nautilus-phrmg "
Dec 16 11:57:17.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp6qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:17.670: INFO: stderr: ""
Dec 16 11:57:17.670: INFO: stdout: ""
Dec 16 11:57:17.670: INFO: update-demo-nautilus-bp6qn is created but not running
Dec 16 11:57:22.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:22.851: INFO: stderr: ""
Dec 16 11:57:22.852: INFO: stdout: "update-demo-nautilus-bp6qn update-demo-nautilus-phrmg "
Dec 16 11:57:22.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp6qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:22.996: INFO: stderr: ""
Dec 16 11:57:22.996: INFO: stdout: "true"
Dec 16 11:57:22.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp6qn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:23.132: INFO: stderr: ""
Dec 16 11:57:23.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 11:57:23.132: INFO: validating pod update-demo-nautilus-bp6qn
Dec 16 11:57:23.163: INFO: got data: { "image": "nautilus.jpg" }
Dec 16 11:57:23.164: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 11:57:23.164: INFO: update-demo-nautilus-bp6qn is verified up and running
Dec 16 11:57:23.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-phrmg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:23.304: INFO: stderr: ""
Dec 16 11:57:23.305: INFO: stdout: "true"
Dec 16 11:57:23.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-phrmg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:57:23.438: INFO: stderr: ""
Dec 16 11:57:23.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 16 11:57:23.438: INFO: validating pod update-demo-nautilus-phrmg
Dec 16 11:57:23.447: INFO: got data: { "image": "nautilus.jpg" }
Dec 16 11:57:23.447: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 16 11:57:23.447: INFO: update-demo-nautilus-phrmg is verified up and running
STEP: rolling-update to new replication controller
Dec 16 11:57:23.450: INFO: scanned /root for discovery docs:
Dec 16 11:57:23.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:58:01.404: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 16 11:58:01.405: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 16 11:58:01.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:58:01.596: INFO: stderr: ""
Dec 16 11:58:01.596: INFO: stdout: "update-demo-kitten-c9bct update-demo-kitten-ghlz5 update-demo-nautilus-phrmg "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 16 11:58:06.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:58:06.770: INFO: stderr: ""
Dec 16 11:58:06.770: INFO: stdout: "update-demo-kitten-c9bct update-demo-kitten-ghlz5 "
Dec 16 11:58:06.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c9bct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr'
Dec 16 11:58:06.951: INFO: stderr: ""
Dec 16 11:58:06.951: INFO: stdout: "true"
Dec 16 11:58:06.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c9bct -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr' Dec 16 11:58:07.082: INFO: stderr: "" Dec 16 11:58:07.083: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 16 11:58:07.083: INFO: validating pod update-demo-kitten-c9bct Dec 16 11:58:07.106: INFO: got data: { "image": "kitten.jpg" } Dec 16 11:58:07.106: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 16 11:58:07.106: INFO: update-demo-kitten-c9bct is verified up and running Dec 16 11:58:07.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ghlz5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr' Dec 16 11:58:07.244: INFO: stderr: "" Dec 16 11:58:07.244: INFO: stdout: "true" Dec 16 11:58:07.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ghlz5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rbsjr' Dec 16 11:58:07.348: INFO: stderr: "" Dec 16 11:58:07.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 16 11:58:07.348: INFO: validating pod update-demo-kitten-ghlz5 Dec 16 11:58:07.363: INFO: got data: { "image": "kitten.jpg" } Dec 16 11:58:07.363: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
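The repeated container-state checks above pass a go-template to `kubectl get pods -o template`. A minimal sketch of how that template evaluates, assuming a stub for kubectl's custom `exists` helper and a hypothetical in-memory pod object (this is not the framework's real code):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// "exists" is a kubectl-specific template helper, not a text/template
// builtin; this stub is an assumption that only handles nested string-keyed
// maps, which is enough to replay the checks from the log above.
func exists(data interface{}, keys ...string) bool {
	cur := data
	for _, k := range keys {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return false
		}
		if cur, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

// samplePod mimics the relevant slice of a pod object: one container named
// "update-demo" whose state map has a "running" entry.
func samplePod() map[string]interface{} {
	return map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
}

// checkRunning renders the same template string the e2e framework hands to
// kubectl; it yields "true" only when the update-demo container is running.
func checkRunning(pod map[string]interface{}) string {
	const tpl = `{{if (exists . "status" "containerStatuses")}}` +
		`{{range .status.containerStatuses}}` +
		`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` +
		`{{end}}{{end}}`
	t := template.Must(template.New("check").
		Funcs(template.FuncMap{"exists": exists}).Parse(tpl))
	var out bytes.Buffer
	if err := t.Execute(&out, pod); err != nil {
		return ""
	}
	return out.String()
}

func main() {
	fmt.Println(checkRunning(samplePod()))
}
```

This is why the log alternates between `stdout: ""` (container still Pending, so the range/if chain emits nothing) and `stdout: "true"` once the container reaches the running state.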
Dec 16 11:58:07.363: INFO: update-demo-kitten-ghlz5 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:58:07.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rbsjr" for this suite.
Dec 16 11:58:33.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:58:33.466: INFO: namespace: e2e-tests-kubectl-rbsjr, resource: bindings, ignored listing per whitelist
Dec 16 11:58:33.692: INFO: namespace e2e-tests-kubectl-rbsjr deletion completed in 26.320778401s
• [SLOW TEST:88.233 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:58:33.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 16 11:58:34.022: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 11:58:34.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-khj98" for this suite.
Dec 16 11:58:40.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 11:58:40.384: INFO: namespace: e2e-tests-kubectl-khj98, resource: bindings, ignored listing per whitelist
Dec 16 11:58:40.512: INFO: namespace e2e-tests-kubectl-khj98 deletion completed in 6.299794389s
• [SLOW TEST:6.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 11:58:40.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-66b663df-1ffb-11ea-9388-0242ac110004 STEP: Creating a pod to test consume secrets Dec 16 11:58:41.016: INFO: Waiting up to 5m0s for pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-tdkjj" to be "success or failure" Dec 16 11:58:41.040: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.366856ms Dec 16 11:58:43.050: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032989796s Dec 16 11:58:45.064: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046968787s Dec 16 11:58:47.083: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066243268s Dec 16 11:58:49.929: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.912566664s Dec 16 11:58:51.969: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.952318341s Dec 16 11:58:53.987: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.969937264s STEP: Saw pod success Dec 16 11:58:53.987: INFO: Pod "pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:58:53.991: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004 container secret-volume-test: STEP: delete the pod Dec 16 11:58:54.641: INFO: Waiting for pod pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004 to disappear Dec 16 11:58:54.662: INFO: Pod pod-secrets-66bc74ab-1ffb-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:58:54.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tdkjj" for this suite. Dec 16 11:59:00.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:59:00.921: INFO: namespace: e2e-tests-secrets-tdkjj, resource: bindings, ignored listing per whitelist Dec 16 11:59:00.952: INFO: namespace e2e-tests-secrets-tdkjj deletion completed in 6.27699802s • [SLOW TEST:20.440 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Dec 16 11:59:00.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-72e16cb2-1ffb-11ea-9388-0242ac110004 STEP: Creating secret with name secret-projected-all-test-volume-72e16c72-1ffb-11ea-9388-0242ac110004 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 16 11:59:01.452: INFO: Waiting up to 5m0s for pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-f98s4" to be "success or failure" Dec 16 11:59:01.487: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 34.789153ms Dec 16 11:59:03.659: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206354456s Dec 16 11:59:05.688: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23624649s Dec 16 11:59:08.269: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817165061s Dec 16 11:59:10.315: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862562873s Dec 16 11:59:12.422: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.970017712s Dec 16 11:59:14.449: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.996769046s STEP: Saw pod success Dec 16 11:59:14.449: INFO: Pod "projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:59:14.459: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004 container projected-all-volume-test: STEP: delete the pod Dec 16 11:59:15.332: INFO: Waiting for pod projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004 to disappear Dec 16 11:59:15.364: INFO: Pod projected-volume-72e16a26-1ffb-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:59:15.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f98s4" for this suite. Dec 16 11:59:21.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:59:21.576: INFO: namespace: e2e-tests-projected-f98s4, resource: bindings, ignored listing per whitelist Dec 16 11:59:21.631: INFO: namespace e2e-tests-projected-f98s4 deletion completed in 6.237604865s • [SLOW TEST:20.677 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 
11:59:21.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 16 11:59:21.963: INFO: Waiting up to 5m0s for pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-nxsl8" to be "success or failure" Dec 16 11:59:21.996: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 32.850821ms Dec 16 11:59:24.150: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186897359s Dec 16 11:59:26.179: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21498081s Dec 16 11:59:28.818: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854002374s Dec 16 11:59:31.147: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183824646s Dec 16 11:59:33.904: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.940345205s Dec 16 11:59:36.469: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.505480561s STEP: Saw pod success Dec 16 11:59:36.470: INFO: Pod "pod-7f27bf86-1ffb-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 11:59:36.770: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7f27bf86-1ffb-11ea-9388-0242ac110004 container test-container: STEP: delete the pod Dec 16 11:59:36.946: INFO: Waiting for pod pod-7f27bf86-1ffb-11ea-9388-0242ac110004 to disappear Dec 16 11:59:36.955: INFO: Pod pod-7f27bf86-1ffb-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 11:59:36.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nxsl8" for this suite. Dec 16 11:59:43.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 11:59:43.129: INFO: namespace: e2e-tests-emptydir-nxsl8, resource: bindings, ignored listing per whitelist Dec 16 11:59:43.256: INFO: namespace e2e-tests-emptydir-nxsl8 deletion completed in 6.287171764s • [SLOW TEST:21.626 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 11:59:43.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Dec 16 11:59:43.399: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-54j89" to be "success or failure" Dec 16 11:59:43.449: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 50.077299ms Dec 16 11:59:45.654: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25419661s Dec 16 11:59:47.673: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27398675s Dec 16 11:59:49.729: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329167267s Dec 16 11:59:52.375: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.975606625s Dec 16 11:59:54.392: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.992805501s Dec 16 11:59:56.408: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.008567371s Dec 16 11:59:58.419: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.020042599s Dec 16 12:00:00.436: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.037071351s STEP: Saw pod success Dec 16 12:00:00.437: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Dec 16 12:00:00.451: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: STEP: delete the pod Dec 16 12:00:01.004: INFO: Waiting for pod pod-host-path-test to disappear Dec 16 12:00:01.061: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:00:01.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-54j89" for this suite. Dec 16 12:00:07.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:00:07.390: INFO: namespace: e2e-tests-hostpath-54j89, resource: bindings, ignored listing per whitelist Dec 16 12:00:07.416: INFO: namespace e2e-tests-hostpath-54j89 deletion completed in 6.253424981s • [SLOW TEST:24.159 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:00:07.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Dec 16 12:00:07.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6492r' Dec 16 12:00:08.139: INFO: stderr: "" Dec 16 12:00:08.139: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Dec 16 12:00:09.160: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:09.160: INFO: Found 0 / 1 Dec 16 12:00:11.129: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:11.129: INFO: Found 0 / 1 Dec 16 12:00:11.159: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:11.159: INFO: Found 0 / 1 Dec 16 12:00:12.632: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:12.633: INFO: Found 0 / 1 Dec 16 12:00:13.156: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:13.156: INFO: Found 0 / 1 Dec 16 12:00:14.160: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:14.160: INFO: Found 0 / 1 Dec 16 12:00:15.161: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:15.161: INFO: Found 0 / 1 Dec 16 12:00:16.917: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:16.917: INFO: Found 0 / 1 Dec 16 12:00:18.859: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:18.859: INFO: Found 0 / 1 Dec 16 12:00:19.518: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:19.518: INFO: Found 0 / 1 Dec 16 12:00:20.189: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:20.190: INFO: Found 0 / 1 Dec 16 12:00:21.181: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:21.181: INFO: Found 0 / 1 Dec 16 12:00:22.275: INFO: Selector matched 1 pods for map[app:redis] 
Dec 16 12:00:22.276: INFO: Found 1 / 1 Dec 16 12:00:22.276: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Dec 16 12:00:22.285: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:22.286: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Dec 16 12:00:22.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-sbb24 --namespace=e2e-tests-kubectl-6492r -p {"metadata":{"annotations":{"x":"y"}}}' Dec 16 12:00:22.448: INFO: stderr: "" Dec 16 12:00:22.449: INFO: stdout: "pod/redis-master-sbb24 patched\n" STEP: checking annotations Dec 16 12:00:22.463: INFO: Selector matched 1 pods for map[app:redis] Dec 16 12:00:22.463: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:00:22.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6492r" for this suite. 
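The `kubectl patch pod redis-master-sbb24 -p '{"metadata":{"annotations":{"x":"y"}}}'` call above merges the patch body into the live object. A sketch of the merge semantics for this annotation-only case, assuming plain RFC 7386 merge-patch rules (kubectl actually applies a strategic merge patch to built-in kinds, which happens to behave identically here):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergePatch applies an RFC 7386-style JSON merge patch to doc. This is a
// simplified assumption, not kubectl's real implementation: maps merge
// recursively, null deletes a key, everything else replaces.
func mergePatch(doc, patch map[string]interface{}) map[string]interface{} {
	for k, v := range patch {
		if v == nil { // null in a merge patch deletes the key
			delete(doc, k)
			continue
		}
		if pm, ok := v.(map[string]interface{}); ok {
			if dm, ok := doc[k].(map[string]interface{}); ok {
				doc[k] = mergePatch(dm, pm)
				continue
			}
		}
		doc[k] = v
	}
	return doc
}

func main() {
	// Minimal pod metadata standing in for redis-master-sbb24; the
	// pre-existing annotation is hypothetical.
	pod := map[string]interface{}{}
	json.Unmarshal([]byte(`{"metadata":{"name":"redis-master-sbb24","annotations":{"existing":"kept"}}}`), &pod)

	// The exact patch body from the test run above.
	patch := map[string]interface{}{}
	json.Unmarshal([]byte(`{"metadata":{"annotations":{"x":"y"}}}`), &patch)

	out, _ := json.Marshal(mergePatch(pod, patch))
	fmt.Println(string(out))
}
```

The recursive-merge branch is what lets the patch add the `x: y` annotation without clobbering the pod's name or any annotations already present.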
Dec 16 12:00:46.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:00:46.592: INFO: namespace: e2e-tests-kubectl-6492r, resource: bindings, ignored listing per whitelist
Dec 16 12:00:46.670: INFO: namespace e2e-tests-kubectl-6492r deletion completed in 24.200061698s
• [SLOW TEST:39.254 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:00:46.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 16 12:00:58.000: INFO: Successfully updated pod "labelsupdateb20a8da0-1ffb-11ea-9388-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:01:00.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2rq67" for this suite.
Dec 16 12:01:22.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:01:22.386: INFO: namespace: e2e-tests-downward-api-2rq67, resource: bindings, ignored listing per whitelist
Dec 16 12:01:22.517: INFO: namespace e2e-tests-downward-api-2rq67 deletion completed in 22.37964732s
• [SLOW TEST:35.847 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:01:22.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c75de9e7-1ffb-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c75de9e7-1ffb-11ea-9388-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:02:47.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gg9sv" for this suite. Dec 16 12:03:11.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:03:11.897: INFO: namespace: e2e-tests-projected-gg9sv, resource: bindings, ignored listing per whitelist Dec 16 12:03:12.001: INFO: namespace e2e-tests-projected-gg9sv deletion completed in 24.419462839s • [SLOW TEST:109.482 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:03:12.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 16 12:03:12.302: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-mfsn4" to be "success or failure" Dec 16 12:03:12.309: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633379ms Dec 16 12:03:14.324: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021387128s Dec 16 12:03:16.339: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036778262s Dec 16 12:03:19.573: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.270652076s Dec 16 12:03:21.588: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.285857928s Dec 16 12:03:23.693: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.390808458s Dec 16 12:03:26.053: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.750246644s STEP: Saw pod success Dec 16 12:03:26.053: INFO: Pod "downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:03:26.072: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004 container client-container: STEP: delete the pod Dec 16 12:03:26.475: INFO: Waiting for pod downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004 to disappear Dec 16 12:03:26.532: INFO: Pod downwardapi-volume-087cd580-1ffc-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:03:26.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mfsn4" for this suite. Dec 16 12:03:32.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:03:32.648: INFO: namespace: e2e-tests-downward-api-mfsn4, resource: bindings, ignored listing per whitelist Dec 16 12:03:32.749: INFO: namespace e2e-tests-downward-api-mfsn4 deletion completed in 6.187659888s • [SLOW TEST:20.748 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Dec 16 12:03:32.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 16 12:03:33.036: INFO: Creating ReplicaSet my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004 Dec 16 12:03:33.072: INFO: Pod name my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004: Found 0 pods out of 1 Dec 16 12:03:38.136: INFO: Pod name my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004: Found 1 pods out of 1 Dec 16 12:03:38.136: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004" is running Dec 16 12:03:44.167: INFO: Pod "my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004-fwhdf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:03:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:03:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-16 12:03:33 +0000 UTC Reason: Message:}]) Dec 16 12:03:44.167: INFO: Trying to dial the pod Dec 16 12:03:49.210: INFO: Controller my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004: Got expected result from replica 1 [my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004-fwhdf]: 
"my-hostname-basic-14db73b6-1ffc-11ea-9388-0242ac110004-fwhdf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:03:49.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-c8tgb" for this suite. Dec 16 12:03:57.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:03:57.347: INFO: namespace: e2e-tests-replicaset-c8tgb, resource: bindings, ignored listing per whitelist Dec 16 12:03:57.401: INFO: namespace e2e-tests-replicaset-c8tgb deletion completed in 8.182512769s • [SLOW TEST:24.651 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:03:57.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Dec 16 12:03:58.895: INFO: Waiting up to 5m0s for pod 
"var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004" in namespace "e2e-tests-var-expansion-8mg4l" to be "success or failure" Dec 16 12:03:59.011: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 116.134805ms Dec 16 12:04:01.354: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458921218s Dec 16 12:04:03.376: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481382225s Dec 16 12:04:06.391: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.495646836s Dec 16 12:04:08.418: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.522671629s Dec 16 12:04:10.433: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.537965012s STEP: Saw pod success Dec 16 12:04:10.433: INFO: Pod "var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:04:10.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004 container dapi-container: STEP: delete the pod Dec 16 12:04:10.705: INFO: Waiting for pod var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004 to disappear Dec 16 12:04:10.779: INFO: Pod var-expansion-24392b6f-1ffc-11ea-9388-0242ac110004 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:04:10.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-8mg4l" for this suite. 
Dec 16 12:04:18.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:04:18.380: INFO: namespace: e2e-tests-var-expansion-8mg4l, resource: bindings, ignored listing per whitelist Dec 16 12:04:18.459: INFO: namespace e2e-tests-var-expansion-8mg4l deletion completed in 7.664671402s • [SLOW TEST:21.058 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:04:18.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 16 12:04:18.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-ntjpq' Dec 16 12:04:20.886: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 16 12:04:20.886: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Dec 16 12:04:25.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-ntjpq' Dec 16 12:04:25.563: INFO: stderr: "" Dec 16 12:04:25.563: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:04:25.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ntjpq" for this suite. 
Dec 16 12:04:31.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:04:31.732: INFO: namespace: e2e-tests-kubectl-ntjpq, resource: bindings, ignored listing per whitelist Dec 16 12:04:31.871: INFO: namespace e2e-tests-kubectl-ntjpq deletion completed in 6.281162007s • [SLOW TEST:13.410 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:04:31.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 16 12:04:32.130: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 16 12:04:32.211: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 16 12:04:38.211: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 16 12:04:42.276: 
INFO: Creating deployment "test-rolling-update-deployment" Dec 16 12:04:42.310: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 16 12:04:42.443: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 16 12:04:44.724: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 16 12:04:44.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:04:46.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:04:48.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:04:50.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:04:52.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094692, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712094682, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:04:55.216: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 16 12:04:55.268: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-bbrrc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbrrc/deployments/test-rolling-update-deployment,UID:3e2294ec-1ffc-11ea-a994-fa163e34d433,ResourceVersion:15009725,Generation:1,CreationTimestamp:2019-12-16 12:04:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-16 12:04:42 +0000 UTC 2019-12-16 12:04:42 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-16 12:04:53 +0000 UTC 2019-12-16 12:04:42 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 16 12:04:55.522: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-bbrrc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbrrc/replicasets/test-rolling-update-deployment-75db98fb4c,UID:3e43de80-1ffc-11ea-a994-fa163e34d433,ResourceVersion:15009715,Generation:1,CreationTimestamp:2019-12-16 12:04:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3e2294ec-1ffc-11ea-a994-fa163e34d433 0xc001ccfed7 0xc001ccfed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 16 12:04:55.522: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 16 12:04:55.523: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-bbrrc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bbrrc/replicasets/test-rolling-update-controller,UID:38168ffd-1ffc-11ea-a994-fa163e34d433,ResourceVersion:15009724,Generation:2,CreationTimestamp:2019-12-16 12:04:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 3e2294ec-1ffc-11ea-a994-fa163e34d433 0xc001ccfe17 0xc001ccfe18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 12:04:55.538: INFO: Pod "test-rolling-update-deployment-75db98fb4c-5d4fb" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-5d4fb,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-bbrrc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bbrrc/pods/test-rolling-update-deployment-75db98fb4c-5d4fb,UID:3e6ec805-1ffc-11ea-a994-fa163e34d433,ResourceVersion:15009714,Generation:0,CreationTimestamp:2019-12-16 12:04:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 3e43de80-1ffc-11ea-a994-fa163e34d433 0xc00244c7b7 0xc00244c7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-c89cx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c89cx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-c89cx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00244c820} {node.kubernetes.io/unreachable Exists NoExecute 0xc00244c840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:04:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:04:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:04:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:04:42 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-16 12:04:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-16 12:04:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9f92853945ff72df33a397286d9772e97c41b3d3eebab09b5d541b906739aae4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:04:55.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-bbrrc" for this suite. Dec 16 12:05:05.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:05:05.658: INFO: namespace: e2e-tests-deployment-bbrrc, resource: bindings, ignored listing per whitelist Dec 16 12:05:05.726: INFO: namespace e2e-tests-deployment-bbrrc deletion completed in 10.180708533s • [SLOW TEST:33.855 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:05:05.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4c2e194d-1ffc-11ea-9388-0242ac110004 STEP: Creating a pod to test consume secrets Dec 16 12:05:05.947: INFO: Waiting up to 5m0s for pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-l28fn" to be "success or failure" Dec 16 12:05:05.965: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.152261ms Dec 16 12:05:08.010: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062715526s Dec 16 12:05:10.039: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091647218s Dec 16 12:05:12.287: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339792709s Dec 16 12:05:14.324: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.376772932s Dec 16 12:05:16.368: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.421096913s Dec 16 12:05:18.396: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.448638809s STEP: Saw pod success Dec 16 12:05:18.396: INFO: Pod "pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:05:18.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004 container secret-volume-test: STEP: delete the pod Dec 16 12:05:19.149: INFO: Waiting for pod pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004 to disappear Dec 16 12:05:19.165: INFO: Pod pod-secrets-4c3a042d-1ffc-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:05:19.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-l28fn" for this suite. 
Dec 16 12:05:27.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:05:27.400: INFO: namespace: e2e-tests-secrets-l28fn, resource: bindings, ignored listing per whitelist Dec 16 12:05:27.408: INFO: namespace e2e-tests-secrets-l28fn deletion completed in 8.231909813s • [SLOW TEST:21.681 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:05:27.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Dec 16 12:05:28.840: INFO: Pod name wrapped-volume-race-59baaf5e-1ffc-11ea-9388-0242ac110004: Found 0 pods out of 5 Dec 16 12:05:33.873: INFO: Pod name wrapped-volume-race-59baaf5e-1ffc-11ea-9388-0242ac110004: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-59baaf5e-1ffc-11ea-9388-0242ac110004 in namespace 
e2e-tests-emptydir-wrapper-jfzw9, will wait for the garbage collector to delete the pods Dec 16 12:07:38.088: INFO: Deleting ReplicationController wrapped-volume-race-59baaf5e-1ffc-11ea-9388-0242ac110004 took: 65.118929ms Dec 16 12:07:38.490: INFO: Terminating ReplicationController wrapped-volume-race-59baaf5e-1ffc-11ea-9388-0242ac110004 pods took: 401.783466ms STEP: Creating RC which spawns configmap-volume pods Dec 16 12:08:23.759: INFO: Pod name wrapped-volume-race-c20a9eda-1ffc-11ea-9388-0242ac110004: Found 0 pods out of 5 Dec 16 12:08:28.781: INFO: Pod name wrapped-volume-race-c20a9eda-1ffc-11ea-9388-0242ac110004: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c20a9eda-1ffc-11ea-9388-0242ac110004 in namespace e2e-tests-emptydir-wrapper-jfzw9, will wait for the garbage collector to delete the pods Dec 16 12:10:45.174: INFO: Deleting ReplicationController wrapped-volume-race-c20a9eda-1ffc-11ea-9388-0242ac110004 took: 104.594929ms Dec 16 12:10:45.575: INFO: Terminating ReplicationController wrapped-volume-race-c20a9eda-1ffc-11ea-9388-0242ac110004 pods took: 401.298293ms STEP: Creating RC which spawns configmap-volume pods Dec 16 12:11:33.440: INFO: Pod name wrapped-volume-race-330afabd-1ffd-11ea-9388-0242ac110004: Found 0 pods out of 5 Dec 16 12:11:38.515: INFO: Pod name wrapped-volume-race-330afabd-1ffd-11ea-9388-0242ac110004: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-330afabd-1ffd-11ea-9388-0242ac110004 in namespace e2e-tests-emptydir-wrapper-jfzw9, will wait for the garbage collector to delete the pods Dec 16 12:14:04.723: INFO: Deleting ReplicationController wrapped-volume-race-330afabd-1ffd-11ea-9388-0242ac110004 took: 37.78515ms Dec 16 12:14:05.124: INFO: Terminating ReplicationController wrapped-volume-race-330afabd-1ffd-11ea-9388-0242ac110004 pods took: 401.483356ms STEP: Cleaning up the configMaps [AfterEach] 
[sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:14:55.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-jfzw9" for this suite. Dec 16 12:15:05.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:15:05.539: INFO: namespace: e2e-tests-emptydir-wrapper-jfzw9, resource: bindings, ignored listing per whitelist Dec 16 12:15:05.601: INFO: namespace e2e-tests-emptydir-wrapper-jfzw9 deletion completed in 10.372718769s • [SLOW TEST:578.193 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:15:05.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-b1cbcd86-1ffd-11ea-9388-0242ac110004 STEP: Creating a pod to test consume 
configMaps Dec 16 12:15:05.855: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-lb7xt" to be "success or failure" Dec 16 12:15:05.881: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 25.443734ms Dec 16 12:15:10.994: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.138504996s Dec 16 12:15:13.044: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.188993396s Dec 16 12:15:15.079: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.224006377s Dec 16 12:15:17.094: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.239304006s Dec 16 12:15:19.112: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.25639172s Dec 16 12:15:21.975: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.120312512s STEP: Saw pod success Dec 16 12:15:21.976: INFO: Pod "pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:15:22.413: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004 container configmap-volume-test: STEP: delete the pod Dec 16 12:15:22.594: INFO: Waiting for pod pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004 to disappear Dec 16 12:15:22.687: INFO: Pod pod-configmaps-b1ccd747-1ffd-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:15:22.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lb7xt" for this suite. Dec 16 12:15:28.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:15:29.153: INFO: namespace: e2e-tests-configmap-lb7xt, resource: bindings, ignored listing per whitelist Dec 16 12:15:29.215: INFO: namespace e2e-tests-configmap-lb7xt deletion completed in 6.498188749s • [SLOW TEST:23.614 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 
12:15:29.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-gqp6 STEP: Creating a pod to test atomic-volume-subpath Dec 16 12:15:29.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gqp6" in namespace "e2e-tests-subpath-js95d" to be "success or failure" Dec 16 12:15:29.523: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.686285ms Dec 16 12:15:31.700: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19359193s Dec 16 12:15:33.737: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230465468s Dec 16 12:15:36.464: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958230977s Dec 16 12:15:38.494: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.987484339s Dec 16 12:15:40.522: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.015611365s Dec 16 12:15:42.548: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.041897711s Dec 16 12:15:44.576: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.069619248s Dec 16 12:15:46.603: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.096721361s Dec 16 12:15:48.622: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 19.115419352s Dec 16 12:15:50.670: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 21.163538304s Dec 16 12:15:52.691: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 23.184445269s Dec 16 12:15:54.705: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 25.198563041s Dec 16 12:15:56.719: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 27.212935909s Dec 16 12:15:58.750: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 29.244235444s Dec 16 12:16:00.810: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 31.304066753s Dec 16 12:16:02.833: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 33.326597279s Dec 16 12:16:04.910: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Running", Reason="", readiness=false. Elapsed: 35.403357795s Dec 16 12:16:06.942: INFO: Pod "pod-subpath-test-projected-gqp6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 37.435701105s STEP: Saw pod success Dec 16 12:16:06.942: INFO: Pod "pod-subpath-test-projected-gqp6" satisfied condition "success or failure" Dec 16 12:16:06.953: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-gqp6 container test-container-subpath-projected-gqp6: STEP: delete the pod Dec 16 12:16:08.062: INFO: Waiting for pod pod-subpath-test-projected-gqp6 to disappear Dec 16 12:16:08.274: INFO: Pod pod-subpath-test-projected-gqp6 no longer exists STEP: Deleting pod pod-subpath-test-projected-gqp6 Dec 16 12:16:08.274: INFO: Deleting pod "pod-subpath-test-projected-gqp6" in namespace "e2e-tests-subpath-js95d" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:16:08.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-js95d" for this suite. Dec 16 12:16:16.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:16:16.689: INFO: namespace: e2e-tests-subpath-js95d, resource: bindings, ignored listing per whitelist Dec 16 12:16:16.694: INFO: namespace e2e-tests-subpath-js95d deletion completed in 8.39913147s • [SLOW TEST:47.479 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:16:16.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 16 12:16:16.934: INFO: Waiting up to 5m0s for pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-cdtl2" to be "success or failure" Dec 16 12:16:17.186: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 252.041969ms Dec 16 12:16:19.550: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.615720858s Dec 16 12:16:21.568: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.633688599s Dec 16 12:16:23.901: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967039554s Dec 16 12:16:25.923: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.98844681s Dec 16 12:16:27.941: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.007068742s STEP: Saw pod success Dec 16 12:16:27.942: INFO: Pod "downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:16:27.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004 container dapi-container: STEP: delete the pod Dec 16 12:16:28.058: INFO: Waiting for pod downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004 to disappear Dec 16 12:16:28.566: INFO: Pod downward-api-dc1cab32-1ffd-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:16:28.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cdtl2" for this suite. Dec 16 12:16:35.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:16:35.767: INFO: namespace: e2e-tests-downward-api-cdtl2, resource: bindings, ignored listing per whitelist Dec 16 12:16:35.798: INFO: namespace e2e-tests-downward-api-cdtl2 deletion completed in 6.388846348s • [SLOW TEST:19.103 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Dec 16 12:16:35.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Dec 16 12:16:36.105: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:16:54.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-77pk5" for this suite. Dec 16 12:17:00.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:17:01.345: INFO: namespace: e2e-tests-init-container-77pk5, resource: bindings, ignored listing per whitelist Dec 16 12:17:01.609: INFO: namespace e2e-tests-init-container-77pk5 deletion completed in 7.201419749s • [SLOW TEST:25.812 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:17:01.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4ths5 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Dec 16 12:17:02.283: INFO: Found 0 stateful pods, waiting for 3 Dec 16 12:17:12.309: INFO: Found 1 stateful pods, waiting for 3 Dec 16 12:17:22.322: INFO: Found 2 stateful pods, waiting for 3 Dec 16 12:17:32.794: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:17:32.794: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:17:32.794: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 16 12:17:42.320: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:17:42.320: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:17:42.320: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from 
docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Dec 16 12:17:42.375: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Dec 16 12:17:52.648: INFO: Updating stateful set ss2 Dec 16 12:17:52.664: INFO: Waiting for Pod e2e-tests-statefulset-4ths5/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Dec 16 12:18:03.511: INFO: Found 2 stateful pods, waiting for 3 Dec 16 12:18:13.680: INFO: Found 2 stateful pods, waiting for 3 Dec 16 12:18:24.159: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:18:24.159: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:18:24.159: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 16 12:18:33.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:18:33.536: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 16 12:18:33.536: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 16 12:18:33.597: INFO: Updating stateful set ss2 Dec 16 12:18:33.653: INFO: Waiting for Pod e2e-tests-statefulset-4ths5/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 16 12:18:44.001: INFO: Updating stateful set ss2 Dec 16 12:18:44.044: INFO: Waiting for StatefulSet e2e-tests-statefulset-4ths5/ss2 to complete update Dec 16 12:18:44.045: INFO: Waiting for Pod e2e-tests-statefulset-4ths5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 16 12:18:54.252: INFO: Waiting for StatefulSet e2e-tests-statefulset-4ths5/ss2 to complete update Dec 16 12:18:54.252: INFO: Waiting 
for Pod e2e-tests-statefulset-4ths5/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Dec 16 12:19:04.068: INFO: Waiting for StatefulSet e2e-tests-statefulset-4ths5/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 16 12:19:14.075: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4ths5 Dec 16 12:19:14.080: INFO: Scaling statefulset ss2 to 0 Dec 16 12:19:44.127: INFO: Waiting for statefulset status.replicas updated to 0 Dec 16 12:19:44.137: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:19:44.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-4ths5" for this suite. Dec 16 12:19:52.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:19:52.395: INFO: namespace: e2e-tests-statefulset-4ths5, resource: bindings, ignored listing per whitelist Dec 16 12:19:52.500: INFO: namespace e2e-tests-statefulset-4ths5 deletion completed in 8.249724984s • [SLOW TEST:170.889 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:19:52.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 16 12:19:52.742: INFO: Waiting up to 5m0s for pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-rv5q8" to be "success or failure" Dec 16 12:19:52.887: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 144.506781ms Dec 16 12:19:55.025: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282438864s Dec 16 12:19:57.068: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324987317s Dec 16 12:19:59.351: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608686143s Dec 16 12:20:01.410: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.666867762s Dec 16 12:20:03.425: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.682370332s STEP: Saw pod success Dec 16 12:20:03.425: INFO: Pod "pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:20:03.432: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004 container test-container: STEP: delete the pod Dec 16 12:20:03.518: INFO: Waiting for pod pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004 to disappear Dec 16 12:20:03.528: INFO: Pod pod-5ccb1ae8-1ffe-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:20:03.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rv5q8" for this suite. Dec 16 12:20:10.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:20:10.515: INFO: namespace: e2e-tests-emptydir-rv5q8, resource: bindings, ignored listing per whitelist Dec 16 12:20:10.932: INFO: namespace e2e-tests-emptydir-rv5q8 deletion completed in 7.396688136s • [SLOW TEST:18.431 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:20:10.932: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Dec 16 12:20:11.197: INFO: Waiting up to 5m0s for pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-9txxm" to be "success or failure" Dec 16 12:20:11.207: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.043201ms Dec 16 12:20:13.575: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377163714s Dec 16 12:20:15.593: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.395081415s Dec 16 12:20:18.578: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.380301495s Dec 16 12:20:20.603: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.40498464s Dec 16 12:20:22.626: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.428470865s Dec 16 12:20:24.648: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.450016463s STEP: Saw pod success Dec 16 12:20:24.648: INFO: Pod "downward-api-67c9be16-1ffe-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:20:24.652: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-67c9be16-1ffe-11ea-9388-0242ac110004 container dapi-container: STEP: delete the pod Dec 16 12:20:25.141: INFO: Waiting for pod downward-api-67c9be16-1ffe-11ea-9388-0242ac110004 to disappear Dec 16 12:20:25.158: INFO: Pod downward-api-67c9be16-1ffe-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:20:25.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9txxm" for this suite. Dec 16 12:20:31.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:20:31.603: INFO: namespace: e2e-tests-downward-api-9txxm, resource: bindings, ignored listing per whitelist Dec 16 12:20:31.642: INFO: namespace e2e-tests-downward-api-9txxm deletion completed in 6.476061026s • [SLOW TEST:20.710 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating 
a kubernetes client Dec 16 12:20:31.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-74697246-1ffe-11ea-9388-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 16 12:20:32.589: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-hzspb" to be "success or failure" Dec 16 12:20:32.700: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 110.249163ms Dec 16 12:20:34.782: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192438522s Dec 16 12:20:36.823: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232894775s Dec 16 12:20:38.875: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284869017s Dec 16 12:20:40.917: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326765712s Dec 16 12:20:42.961: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.370879121s STEP: Saw pod success Dec 16 12:20:42.961: INFO: Pod "pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004" satisfied condition "success or failure" Dec 16 12:20:42.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Dec 16 12:20:43.174: INFO: Waiting for pod pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004 to disappear Dec 16 12:20:43.183: INFO: Pod pod-projected-configmaps-7484afc2-1ffe-11ea-9388-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:20:43.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hzspb" for this suite. Dec 16 12:20:49.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:20:49.246: INFO: namespace: e2e-tests-projected-hzspb, resource: bindings, ignored listing per whitelist Dec 16 12:20:49.480: INFO: namespace e2e-tests-projected-hzspb deletion completed in 6.285048285s • [SLOW TEST:17.837 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:20:49.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-r2std Dec 16 12:21:01.824: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-r2std STEP: checking the pod's current state and verifying that restartCount is present Dec 16 12:21:01.832: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:25:03.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-r2std" for this suite. 
Dec 16 12:25:10.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:25:10.329: INFO: namespace: e2e-tests-container-probe-r2std, resource: bindings, ignored listing per whitelist Dec 16 12:25:10.598: INFO: namespace e2e-tests-container-probe-r2std deletion completed in 6.555858599s • [SLOW TEST:261.118 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:25:10.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Dec 16 12:25:10.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 16 12:25:12.995: INFO: stderr: "" Dec 16 12:25:12.995: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at 
\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:25:12.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4nd4v" for this suite. Dec 16 12:25:19.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 16 12:25:19.265: INFO: namespace: e2e-tests-kubectl-4nd4v, resource: bindings, ignored listing per whitelist Dec 16 12:25:19.275: INFO: namespace e2e-tests-kubectl-4nd4v deletion completed in 6.225936605s • [SLOW TEST:8.676 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 16 12:25:19.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 16 12:25:19.467: INFO: Creating deployment "test-recreate-deployment" Dec 16 12:25:19.478: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 16 12:25:19.555: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Dec 16 12:25:21.668: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 16 12:25:21.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:25:23.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:25:26.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:25:27.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:25:29.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712095919, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 16 12:25:31.693: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 16 12:25:31.716: INFO: Updating deployment test-recreate-deployment Dec 16 12:25:31.716: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 16 12:25:32.870: INFO: Deployment 
"test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-6zt4b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6zt4b/deployments/test-recreate-deployment,UID:1f8d37ad-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012155,Generation:2,CreationTimestamp:2019-12-16 12:25:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-16 12:25:32 +0000 UTC 2019-12-16 12:25:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-16 12:25:32 +0000 UTC 2019-12-16 12:25:19 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 16 12:25:32.896: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-6zt4b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6zt4b/replicasets/test-recreate-deployment-589c4bfd,UID:273a3013-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012154,Generation:1,CreationTimestamp:2019-12-16 12:25:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1f8d37ad-1fff-11ea-a994-fa163e34d433 0xc001cc7a9f 0xc001cc7ab0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 12:25:32.897: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 16 12:25:32.897: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-6zt4b,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6zt4b/replicasets/test-recreate-deployment-5bf7f65dc,UID:1f929b44-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012144,Generation:2,CreationTimestamp:2019-12-16 12:25:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1f8d37ad-1fff-11ea-a994-fa163e34d433 0xc001f1a0f0 0xc001f1a0f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 16 12:25:32.915: INFO: Pod "test-recreate-deployment-589c4bfd-ttjdt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ttjdt,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-6zt4b,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6zt4b/pods/test-recreate-deployment-589c4bfd-ttjdt,UID:273efab3-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012150,Generation:0,CreationTimestamp:2019-12-16 12:25:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 273a3013-1fff-11ea-a994-fa163e34d433 0xc001f9226f 0xc001f92280}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gx8nf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gx8nf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gx8nf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f923c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f92460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:25:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 16 12:25:32.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6zt4b" for this suite. 
Dec 16 12:25:41.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:25:41.234: INFO: namespace: e2e-tests-deployment-6zt4b, resource: bindings, ignored listing per whitelist
Dec 16 12:25:41.238: INFO: namespace e2e-tests-deployment-6zt4b deletion completed in 8.306896324s

• [SLOW TEST:21.963 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:25:41.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6fv6x
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 12:25:41.546: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 12:26:11.823: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-6fv6x PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:26:11.823: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:26:12.401: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:26:12.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-6fv6x" for this suite.
Dec 16 12:26:36.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:26:36.778: INFO: namespace: e2e-tests-pod-network-test-6fv6x, resource: bindings, ignored listing per whitelist
Dec 16 12:26:36.787: INFO: namespace e2e-tests-pod-network-test-6fv6x deletion completed in 24.368807533s

• [SLOW TEST:55.549 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:26:36.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:26:37.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-nb9rm" to be "success or failure"
Dec 16 12:26:37.224: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 68.187273ms
Dec 16 12:26:39.523: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366807774s
Dec 16 12:26:41.543: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386244371s
Dec 16 12:26:43.746: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.589613945s
Dec 16 12:26:45.767: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.610641477s
Dec 16 12:26:48.019: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.862266858s
Dec 16 12:26:50.039: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.882855635s
STEP: Saw pod success
Dec 16 12:26:50.039: INFO: Pod "downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:26:50.046: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 12:26:50.300: INFO: Waiting for pod downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:26:50.335: INFO: Pod downwardapi-volume-4dc5db12-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:26:50.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nb9rm" for this suite.
Dec 16 12:26:56.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:26:56.663: INFO: namespace: e2e-tests-projected-nb9rm, resource: bindings, ignored listing per whitelist
Dec 16 12:26:56.697: INFO: namespace e2e-tests-projected-nb9rm deletion completed in 6.319555295s

• [SLOW TEST:19.909 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:26:56.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-bfrfv/secret-test-59a14bfd-1fff-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 12:26:57.015: INFO: Waiting up to 5m0s for pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-secrets-bfrfv" to be "success or failure"
Dec 16 12:26:57.042: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.898137ms
Dec 16 12:26:59.059: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043404818s
Dec 16 12:27:01.082: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066223187s
Dec 16 12:27:04.275: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.259600229s
Dec 16 12:27:06.289: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.273021186s
Dec 16 12:27:08.523: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.507774394s
STEP: Saw pod success
Dec 16 12:27:08.524: INFO: Pod "pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:27:08.676: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004 container env-test: 
STEP: delete the pod
Dec 16 12:27:08.896: INFO: Waiting for pod pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:27:08.900: INFO: Pod pod-configmaps-59a2d011-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:27:08.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bfrfv" for this suite.
Dec 16 12:27:14.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:27:15.092: INFO: namespace: e2e-tests-secrets-bfrfv, resource: bindings, ignored listing per whitelist
Dec 16 12:27:15.101: INFO: namespace e2e-tests-secrets-bfrfv deletion completed in 6.198068337s

• [SLOW TEST:18.404 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:27:15.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 12:27:15.298: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.669167ms)
Dec 16 12:27:15.305: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.150658ms)
Dec 16 12:27:15.369: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 63.499456ms)
Dec 16 12:27:15.375: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.635825ms)
Dec 16 12:27:15.380: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.221982ms)
Dec 16 12:27:15.386: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.715027ms)
Dec 16 12:27:15.391: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.103671ms)
Dec 16 12:27:15.396: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.437099ms)
Dec 16 12:27:15.402: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.210412ms)
Dec 16 12:27:15.407: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.604884ms)
Dec 16 12:27:15.412: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.379995ms)
Dec 16 12:27:15.416: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.79858ms)
Dec 16 12:27:15.421: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.666738ms)
Dec 16 12:27:15.425: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.270671ms)
Dec 16 12:27:15.431: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.329217ms)
Dec 16 12:27:15.446: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.017952ms)
Dec 16 12:27:15.457: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.564314ms)
Dec 16 12:27:15.464: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.41495ms)
Dec 16 12:27:15.468: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.561127ms)
Dec 16 12:27:15.473: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.757873ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:27:15.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-9cwlq" for this suite.
Dec 16 12:27:21.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:27:21.705: INFO: namespace: e2e-tests-proxy-9cwlq, resource: bindings, ignored listing per whitelist
Dec 16 12:27:21.779: INFO: namespace e2e-tests-proxy-9cwlq deletion completed in 6.301410806s

• [SLOW TEST:6.677 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
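Each numbered entry above is one GET against the node's `logs` proxy subresource. A minimal sketch of how that subresource path is assembled, in Python rather than the Go framework the suite actually uses; `build_node_log_proxy_path` is an illustrative helper, not part of the e2e code:

```python
def build_node_log_proxy_path(node, port=None):
    """Build the API server path for a node's /logs proxy subresource.

    With no port, the API server routes to the node's default kubelet port;
    with an explicit port the node resource is addressed as "name:port",
    as in the later explicit-kubelet-port variant of this test.
    """
    target = f"{node}:{port}" if port is not None else node
    return f"/api/v1/nodes/{target}/proxy/logs/"

# The two forms seen in this run:
print(build_node_log_proxy_path("hunter-server-hu5at5svl7ps"))
# -> /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/
print(build_node_log_proxy_path("hunter-server-hu5at5svl7ps", 10250))
# -> /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/
```

The test issues this request twenty times and records each status code and latency, which is what the `(200; …ms)` suffixes report.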
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:27:21.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 16 12:27:22.196: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kfvq9,SelfLink:/api/v1/namespaces/e2e-tests-watch-kfvq9/configmaps/e2e-watch-test-label-changed,UID:68ac0a56-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012419,Generation:0,CreationTimestamp:2019-12-16 12:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 12:27:22.197: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kfvq9,SelfLink:/api/v1/namespaces/e2e-tests-watch-kfvq9/configmaps/e2e-watch-test-label-changed,UID:68ac0a56-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012420,Generation:0,CreationTimestamp:2019-12-16 12:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 16 12:27:22.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kfvq9,SelfLink:/api/v1/namespaces/e2e-tests-watch-kfvq9/configmaps/e2e-watch-test-label-changed,UID:68ac0a56-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012421,Generation:0,CreationTimestamp:2019-12-16 12:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 16 12:27:32.270: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kfvq9,SelfLink:/api/v1/namespaces/e2e-tests-watch-kfvq9/configmaps/e2e-watch-test-label-changed,UID:68ac0a56-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012435,Generation:0,CreationTimestamp:2019-12-16 12:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 12:27:32.271: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kfvq9,SelfLink:/api/v1/namespaces/e2e-tests-watch-kfvq9/configmaps/e2e-watch-test-label-changed,UID:68ac0a56-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012436,Generation:0,CreationTimestamp:2019-12-16 12:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 16 12:27:32.271: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kfvq9,SelfLink:/api/v1/namespaces/e2e-tests-watch-kfvq9/configmaps/e2e-watch-test-label-changed,UID:68ac0a56-1fff-11ea-a994-fa163e34d433,ResourceVersion:15012437,Generation:0,CreationTimestamp:2019-12-16 12:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:27:32.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-kfvq9" for this suite.
Dec 16 12:27:38.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:27:38.554: INFO: namespace: e2e-tests-watch-kfvq9, resource: bindings, ignored listing per whitelist
Dec 16 12:27:38.689: INFO: namespace e2e-tests-watch-kfvq9 deletion completed in 6.401987672s

• [SLOW TEST:16.910 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
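The watch above is label-selected, so an object that stops matching the selector is reported as DELETED even though it still exists, and shows up as ADDED again once the label is restored. A small in-memory sketch of that semantics, assuming a plain Python model of the event stream (not the client-go watch machinery the test uses):

```python
def selector_events(history, selector):
    """Translate snapshots of one object into events as seen through a label selector.

    history: (labels_dict, data) snapshots of the object, with None marking real deletion.
    Yields (event_type, data) the way a label-selected watch would report them.
    """
    was_matching = False
    for snapshot in history:
        if snapshot is None:            # the object was actually deleted
            if was_matching:
                yield ("DELETED", None)
            return
        labels, data = snapshot
        now_matching = all(labels.get(k) == v for k, v in selector.items())
        if now_matching and not was_matching:
            yield ("ADDED", data)       # entered the selector's view
        elif now_matching and was_matching:
            yield ("MODIFIED", data)
        elif was_matching and not now_matching:
            yield ("DELETED", data)     # left the view: reported as a delete
        was_matching = now_matching

# Mirrors the configmap test: create, modify, change the label away,
# restore the label, modify again, then delete for real.
history = [
    ({"watch-this-configmap": "label-changed-and-restored"}, {}),
    ({"watch-this-configmap": "label-changed-and-restored"}, {"mutation": "1"}),
    ({"watch-this-configmap": "some-other-value"}, {"mutation": "1"}),
    ({"watch-this-configmap": "label-changed-and-restored"}, {"mutation": "2"}),
    ({"watch-this-configmap": "label-changed-and-restored"}, {"mutation": "3"}),
    None,
]
events = [e for e, _ in selector_events(history, {"watch-this-configmap": "label-changed-and-restored"})]
print(events)
# -> ['ADDED', 'MODIFIED', 'DELETED', 'ADDED', 'MODIFIED', 'DELETED']
```

That is exactly the sequence the log records: three events around the label change, then three more after the label is restored and the object is finally deleted.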
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:27:38.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 16 12:27:38.920: INFO: Waiting up to 5m0s for pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-2g96f" to be "success or failure"
Dec 16 12:27:39.101: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 181.169324ms
Dec 16 12:27:41.114: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193824732s
Dec 16 12:27:43.154: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233503867s
Dec 16 12:27:46.607: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.686969287s
Dec 16 12:27:48.629: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.709247358s
Dec 16 12:27:50.684: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.7637952s
STEP: Saw pod success
Dec 16 12:27:50.684: INFO: Pod "pod-72a72ca7-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:27:50.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-72a72ca7-1fff-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 12:27:50.915: INFO: Waiting for pod pod-72a72ca7-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:27:50.940: INFO: Pod pod-72a72ca7-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:27:50.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2g96f" for this suite.
Dec 16 12:27:59.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:27:59.298: INFO: namespace: e2e-tests-emptydir-2g96f, resource: bindings, ignored listing per whitelist
Dec 16 12:27:59.310: INFO: namespace e2e-tests-emptydir-2g96f deletion completed in 8.338735205s

• [SLOW TEST:20.620 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
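The `(non-root,0666,tmpfs)` test boils down to creating a file inside the emptyDir mount with mode 0666 and reading the permission bits back. A rough local stand-in using an ordinary temporary directory instead of an actual tmpfs-backed emptyDir volume:

```python
import os
import stat
import tempfile

# Create a scratch file, force mode 0666, and read the permission bits back,
# roughly what the test container does inside its emptyDir mount.
with tempfile.TemporaryDirectory() as scratch:
    path = os.path.join(scratch, "test-file")
    with open(path, "w") as f:
        f.write("some content")
    os.chmod(path, 0o666)               # chmod sets the bits exactly, ignoring umask
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))                    # -> 0o666
```

The non-root part of the test name refers to the pod's security context (the container runs as an unprivileged UID), which this sketch does not reproduce.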
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:27:59.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:27:59.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-mzn42" to be "success or failure"
Dec 16 12:27:59.670: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 175.15297ms
Dec 16 12:28:01.688: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192748983s
Dec 16 12:28:03.712: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216687873s
Dec 16 12:28:06.610: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.114833351s
Dec 16 12:28:08.639: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.143892323s
Dec 16 12:28:10.671: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.175652579s
Dec 16 12:28:12.698: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.203287757s
STEP: Saw pod success
Dec 16 12:28:12.699: INFO: Pod "downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:28:12.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 12:28:12.856: INFO: Waiting for pod downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:28:12.985: INFO: Pod downwardapi-volume-7eeb5178-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:28:12.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mzn42" for this suite.
Dec 16 12:28:21.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:28:21.158: INFO: namespace: e2e-tests-projected-mzn42, resource: bindings, ignored listing per whitelist
Dec 16 12:28:21.250: INFO: namespace e2e-tests-projected-mzn42 deletion completed in 8.239988645s

• [SLOW TEST:21.939 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
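Every pod test in this run follows the same wait loop visible in the lines above: poll the pod phase every couple of seconds, up to 5m0s, until it reaches a terminal phase. A bare-bones version of that pattern; the `get_phase` callable stands in for an API GET, and the names are illustrative rather than the framework's:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout elapses.

    Mirrors the framework's "success or failure" wait: Pending keeps polling,
    Succeeded/Failed ends the wait, exceeding the deadline is an error.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Drive it deterministically with a fake clock and a phase sequence
# shaped like the ones in this log.
phases = iter(["Pending"] * 5 + ["Succeeded"])
fake_now = [0.0]
phase, elapsed = wait_for_pod_condition(
    lambda: next(phases),
    clock=lambda: fake_now[0],
    sleep=lambda s: fake_now.__setitem__(0, fake_now[0] + s),
)
# -> phase == "Succeeded" after 5 Pending polls
```

The real loop's intervals drift above 2s (note gaps like 4.2s to 7.1s in the log) because each iteration also pays the API round-trip time.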
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:28:21.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8c0bad21-1fff-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 12:28:21.520: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-dvwn2" to be "success or failure"
Dec 16 12:28:21.535: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.341993ms
Dec 16 12:28:23.656: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135857624s
Dec 16 12:28:25.676: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156281379s
Dec 16 12:28:28.447: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.927227609s
Dec 16 12:28:30.482: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.961771802s
Dec 16 12:28:32.614: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.094279081s
STEP: Saw pod success
Dec 16 12:28:32.615: INFO: Pod "pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:28:32.641: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 16 12:28:33.334: INFO: Waiting for pod pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:28:33.346: INFO: Pod pod-projected-configmaps-8c0dda86-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:28:33.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dvwn2" for this suite.
Dec 16 12:28:41.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:28:41.526: INFO: namespace: e2e-tests-projected-dvwn2, resource: bindings, ignored listing per whitelist
Dec 16 12:28:41.644: INFO: namespace e2e-tests-projected-dvwn2 deletion completed in 8.289892328s

• [SLOW TEST:20.394 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:28:41.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 16 12:28:41.994: INFO: Waiting up to 5m0s for pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-629lb" to be "success or failure"
Dec 16 12:28:42.030: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 35.110382ms
Dec 16 12:28:44.109: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114084968s
Dec 16 12:28:46.140: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145612678s
Dec 16 12:28:48.461: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466799541s
Dec 16 12:28:50.486: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490955561s
Dec 16 12:28:52.991: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.99629542s
Dec 16 12:28:55.113: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.118456082s
STEP: Saw pod success
Dec 16 12:28:55.113: INFO: Pod "downward-api-982bc944-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:28:55.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-982bc944-1fff-11ea-9388-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 16 12:28:55.674: INFO: Waiting for pod downward-api-982bc944-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:28:55.689: INFO: Pod downward-api-982bc944-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:28:55.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-629lb" for this suite.
Dec 16 12:29:01.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:29:01.930: INFO: namespace: e2e-tests-downward-api-629lb, resource: bindings, ignored listing per whitelist
Dec 16 12:29:01.986: INFO: namespace e2e-tests-downward-api-629lb deletion completed in 6.285869499s

• [SLOW TEST:20.342 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:29:01.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 12:29:02.299: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.589716ms)
Dec 16 12:29:02.308: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.353304ms)
Dec 16 12:29:02.317: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.089803ms)
Dec 16 12:29:02.325: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.34994ms)
Dec 16 12:29:02.331: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.529702ms)
Dec 16 12:29:02.336: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.788081ms)
Dec 16 12:29:02.340: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.501711ms)
Dec 16 12:29:02.346: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.467816ms)
Dec 16 12:29:02.353: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.814834ms)
Dec 16 12:29:02.357: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.393589ms)
Dec 16 12:29:02.361: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.184261ms)
Dec 16 12:29:02.366: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.747753ms)
Dec 16 12:29:02.370: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.002362ms)
Dec 16 12:29:02.374: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.800688ms)
Dec 16 12:29:02.379: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.752991ms)
Dec 16 12:29:02.388: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.447869ms)
Dec 16 12:29:02.392: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.179559ms)
Dec 16 12:29:02.397: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.104503ms)
Dec 16 12:29:02.401: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.156449ms)
Dec 16 12:29:02.406: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.720999ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:29:02.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-2pszk" for this suite.
Dec 16 12:29:08.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:29:09.676: INFO: namespace: e2e-tests-proxy-2pszk, resource: bindings, ignored listing per whitelist
Dec 16 12:29:09.742: INFO: namespace e2e-tests-proxy-2pszk deletion completed in 7.330728214s

• [SLOW TEST:7.756 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:29:09.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 16 12:29:20.260: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:29:45.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-frpbr" for this suite.
Dec 16 12:29:52.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:29:52.203: INFO: namespace: e2e-tests-namespaces-frpbr, resource: bindings, ignored listing per whitelist
Dec 16 12:29:52.245: INFO: namespace e2e-tests-namespaces-frpbr deletion completed in 6.274831889s
STEP: Destroying namespace "e2e-tests-nsdeletetest-8wblv" for this suite.
Dec 16 12:29:52.249: INFO: Namespace e2e-tests-nsdeletetest-8wblv was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-nm9hn" for this suite.
Dec 16 12:29:58.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:29:58.447: INFO: namespace: e2e-tests-nsdeletetest-nm9hn, resource: bindings, ignored listing per whitelist
Dec 16 12:29:58.447: INFO: namespace e2e-tests-nsdeletetest-nm9hn deletion completed in 6.197746779s

• [SLOW TEST:48.704 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:29:58.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 16 12:29:58.723: INFO: Waiting up to 5m0s for pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-6gmgb" to be "success or failure"
Dec 16 12:29:58.837: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 113.597179ms
Dec 16 12:30:00.878: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155030036s
Dec 16 12:30:02.910: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186726211s
Dec 16 12:30:05.222: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498485728s
Dec 16 12:30:07.236: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513102124s
Dec 16 12:30:09.311: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.588105778s
STEP: Saw pod success
Dec 16 12:30:09.312: INFO: Pod "pod-c5fd6f16-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:30:09.326: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c5fd6f16-1fff-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 12:30:09.471: INFO: Waiting for pod pod-c5fd6f16-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:30:09.484: INFO: Pod pod-c5fd6f16-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:30:09.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6gmgb" for this suite.
Dec 16 12:30:15.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:30:15.654: INFO: namespace: e2e-tests-emptydir-6gmgb, resource: bindings, ignored listing per whitelist
Dec 16 12:30:15.684: INFO: namespace e2e-tests-emptydir-6gmgb deletion completed in 6.190169888s

• [SLOW TEST:17.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:30:15.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 16 12:30:15.945: INFO: Waiting up to 5m0s for pod "pod-d040fff7-1fff-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-vzbtv" to be "success or failure"
Dec 16 12:30:16.026: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 80.861157ms
Dec 16 12:30:18.127: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182303224s
Dec 16 12:30:20.146: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201791709s
Dec 16 12:30:22.785: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.840692602s
Dec 16 12:30:24.807: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862159084s
Dec 16 12:30:26.910: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964896246s
Dec 16 12:30:28.926: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.980919809s
STEP: Saw pod success
Dec 16 12:30:28.926: INFO: Pod "pod-d040fff7-1fff-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:30:28.930: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d040fff7-1fff-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 12:30:29.956: INFO: Waiting for pod pod-d040fff7-1fff-11ea-9388-0242ac110004 to disappear
Dec 16 12:30:30.208: INFO: Pod pod-d040fff7-1fff-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:30:30.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vzbtv" for this suite.
Dec 16 12:30:36.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:30:36.518: INFO: namespace: e2e-tests-emptydir-vzbtv, resource: bindings, ignored listing per whitelist
Dec 16 12:30:36.678: INFO: namespace e2e-tests-emptydir-vzbtv deletion completed in 6.456989132s

• [SLOW TEST:20.995 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:30:36.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-zpdpp
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-zpdpp
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-zpdpp
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-zpdpp
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-zpdpp
Dec 16 12:30:50.971: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zpdpp, name: ss-0, uid: e3989de8-1fff-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 16 12:30:52.487: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zpdpp, name: ss-0, uid: e3989de8-1fff-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 16 12:30:52.613: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zpdpp, name: ss-0, uid: e3989de8-1fff-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 16 12:30:52.639: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-zpdpp
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-zpdpp
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-zpdpp and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 16 12:31:05.425: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zpdpp
Dec 16 12:31:05.432: INFO: Scaling statefulset ss to 0
Dec 16 12:31:15.504: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 12:31:15.514: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:31:15.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-zpdpp" for this suite.
Dec 16 12:31:23.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:31:23.762: INFO: namespace: e2e-tests-statefulset-zpdpp, resource: bindings, ignored listing per whitelist
Dec 16 12:31:23.930: INFO: namespace e2e-tests-statefulset-zpdpp deletion completed in 8.350559272s

• [SLOW TEST:47.252 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:31:23.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-cblf
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 12:31:24.262: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cblf" in namespace "e2e-tests-subpath-slzj9" to be "success or failure"
Dec 16 12:31:24.359: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 97.210928ms
Dec 16 12:31:26.377: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115154297s
Dec 16 12:31:28.410: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14759637s
Dec 16 12:31:30.710: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447548149s
Dec 16 12:31:32.740: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477611805s
Dec 16 12:31:34.756: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.493842732s
Dec 16 12:31:36.830: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.567931409s
Dec 16 12:31:38.860: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.5982619s
Dec 16 12:31:40.874: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.611726476s
Dec 16 12:31:42.890: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 18.628035147s
Dec 16 12:31:44.909: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 20.646986316s
Dec 16 12:31:46.918: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 22.655913653s
Dec 16 12:31:48.971: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 24.709082519s
Dec 16 12:31:51.035: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 26.772858118s
Dec 16 12:31:53.056: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 28.793798046s
Dec 16 12:31:55.071: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 30.809101851s
Dec 16 12:31:57.084: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 32.822257557s
Dec 16 12:31:59.377: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 35.114603831s
Dec 16 12:32:01.390: INFO: Pod "pod-subpath-test-downwardapi-cblf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.128357337s
STEP: Saw pod success
Dec 16 12:32:01.391: INFO: Pod "pod-subpath-test-downwardapi-cblf" satisfied condition "success or failure"
Dec 16 12:32:01.399: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-cblf container test-container-subpath-downwardapi-cblf: 
STEP: delete the pod
Dec 16 12:32:01.503: INFO: Waiting for pod pod-subpath-test-downwardapi-cblf to disappear
Dec 16 12:32:01.510: INFO: Pod pod-subpath-test-downwardapi-cblf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-cblf
Dec 16 12:32:01.510: INFO: Deleting pod "pod-subpath-test-downwardapi-cblf" in namespace "e2e-tests-subpath-slzj9"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:32:01.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-slzj9" for this suite.
Dec 16 12:32:07.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:32:07.918: INFO: namespace: e2e-tests-subpath-slzj9, resource: bindings, ignored listing per whitelist
Dec 16 12:32:07.971: INFO: namespace e2e-tests-subpath-slzj9 deletion completed in 6.4506989s

• [SLOW TEST:44.040 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:32:07.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 12:32:08.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:32:18.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-prcgq" for this suite.
Dec 16 12:33:04.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:33:04.512: INFO: namespace: e2e-tests-pods-prcgq, resource: bindings, ignored listing per whitelist
Dec 16 12:33:04.656: INFO: namespace e2e-tests-pods-prcgq deletion completed in 46.324160891s

• [SLOW TEST:56.685 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:33:04.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-svf8n
Dec 16 12:33:14.972: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-svf8n
STEP: checking the pod's current state and verifying that restartCount is present
Dec 16 12:33:14.978: INFO: Initial restart count of pod liveness-exec is 0
Dec 16 12:34:06.241: INFO: Restart count of pod e2e-tests-container-probe-svf8n/liveness-exec is now 1 (51.263483681s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:34:06.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-svf8n" for this suite.
Dec 16 12:34:14.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:34:14.683: INFO: namespace: e2e-tests-container-probe-svf8n, resource: bindings, ignored listing per whitelist
Dec 16 12:34:14.718: INFO: namespace e2e-tests-container-probe-svf8n deletion completed in 8.304086795s

• [SLOW TEST:70.062 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:34:14.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:34:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2k7js" for this suite.
Dec 16 12:34:33.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:34:33.690: INFO: namespace: e2e-tests-kubelet-test-2k7js, resource: bindings, ignored listing per whitelist
Dec 16 12:34:33.781: INFO: namespace e2e-tests-kubelet-test-2k7js deletion completed in 6.262994387s

• [SLOW TEST:19.062 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:34:33.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-6a438a6a-2000-11ea-9388-0242ac110004
STEP: Creating secret with name s-test-opt-upd-6a438ba0-2000-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6a438a6a-2000-11ea-9388-0242ac110004
STEP: Updating secret s-test-opt-upd-6a438ba0-2000-11ea-9388-0242ac110004
STEP: Creating secret with name s-test-opt-create-6a438bca-2000-11ea-9388-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:34:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7dx26" for this suite.
Dec 16 12:35:21.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:35:21.157: INFO: namespace: e2e-tests-secrets-7dx26, resource: bindings, ignored listing per whitelist
Dec 16 12:35:21.157: INFO: namespace e2e-tests-secrets-7dx26 deletion completed in 26.177064717s

• [SLOW TEST:47.375 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
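The secrets test above mounts volumes backed by secrets marked optional, deletes one, updates another, creates a third, and waits for the mounted files to reflect each change. A pod spec along those lines could be sketched as follows (names and paths are hypothetical, not the suite's generated ones):

```yaml
# Illustrative sketch: secret volumes with optional: true, so the pod
# starts (and keeps running) even after a referenced secret is deleted;
# updates to existing secrets are eventually reflected in the files.
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-del
      mountPath: /etc/secret-del
    - name: opt-upd
      mountPath: /etc/secret-upd
  volumes:
  - name: opt-del
    secret:
      secretName: s-test-opt-del   # deleted mid-test
      optional: true
  - name: opt-upd
    secret:
      secretName: s-test-opt-upd   # updated mid-test
      optional: true
```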
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:35:21.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 16 12:35:32.061: INFO: Successfully updated pod "annotationupdate86448b3f-2000-11ea-9388-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:35:34.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g7fb2" for this suite.
Dec 16 12:35:58.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:35:58.659: INFO: namespace: e2e-tests-downward-api-g7fb2, resource: bindings, ignored listing per whitelist
Dec 16 12:35:58.686: INFO: namespace e2e-tests-downward-api-g7fb2 deletion completed in 24.355341017s

• [SLOW TEST:37.528 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
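The Downward API test above updates a pod's annotations and waits for the change to appear in a file projected by a downward API volume. A manifest in that spirit could be sketched as below (an assumed illustration, not the generated `annotationupdate...` pod):

```yaml
# Illustrative sketch: the kubelet rewrites the projected annotations
# file when metadata.annotations is modified on the live pod.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-pod   # hypothetical name
  annotations:
    build: "one"               # later updated, e.g. to "two"
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```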
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:35:58.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4g478
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4g478
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4g478
Dec 16 12:35:59.077: INFO: Found 0 stateful pods, waiting for 1
Dec 16 12:36:09.121: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Dec 16 12:36:19.094: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 16 12:36:19.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 12:36:20.004: INFO: stderr: ""
Dec 16 12:36:20.005: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 12:36:20.005: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 12:36:20.087: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 12:36:20.087: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 12:36:20.147: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:36:20.148: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:36:20.148: INFO: 
Dec 16 12:36:20.148: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 16 12:36:22.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990053186s
Dec 16 12:36:23.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.463420327s
Dec 16 12:36:24.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.199223961s
Dec 16 12:36:26.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.183881106s
Dec 16 12:36:28.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.07791402s
Dec 16 12:36:30.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 818.364979ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4g478
Dec 16 12:36:31.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:36:33.274: INFO: stderr: ""
Dec 16 12:36:33.274: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 12:36:33.274: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 12:36:33.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:36:33.583: INFO: rc: 1
Dec 16 12:36:33.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f34de0 exit status 1   true [0xc0003cd8a0 0xc0003cd8c8 0xc0003cd8f0] [0xc0003cd8a0 0xc0003cd8c8 0xc0003cd8f0] [0xc0003cd8c0 0xc0003cd8e8] [0x935700 0x935700] 0xc0013e2240 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 16 12:36:43.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:36:44.596: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 16 12:36:44.596: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 12:36:44.596: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 12:36:44.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:36:45.181: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 16 12:36:45.181: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 16 12:36:45.182: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 16 12:36:45.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 12:36:45.199: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 16 12:36:45.199: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
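The burst-scaling behavior verified above depends on `podManagementPolicy: Parallel`, which lets the StatefulSet controller create and delete pods without waiting for lower-ordinal pods to be Ready. The `mv` commands in the log toggle the file backing the readiness probe to mark pods unhealthy. A StatefulSet of that shape could be sketched as follows (an assumed illustration, not the suite's exact manifest):

```yaml
# Illustrative sketch: Parallel pod management plus a readiness probe
# on index.html, the file the test moves in and out of the web root.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 3
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
```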
STEP: Scale down will not halt with unhealthy stateful pod
Dec 16 12:36:45.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 12:36:45.639: INFO: stderr: ""
Dec 16 12:36:45.640: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 12:36:45.640: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 12:36:45.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 12:36:46.345: INFO: stderr: ""
Dec 16 12:36:46.345: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 12:36:46.345: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 12:36:46.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 16 12:36:46.969: INFO: stderr: ""
Dec 16 12:36:46.969: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 16 12:36:46.969: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 16 12:36:46.969: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 12:36:47.005: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 16 12:36:57.032: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 12:36:57.032: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 12:36:57.032: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 16 12:36:57.083: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:36:57.083: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:36:57.083: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:36:57.083: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:36:57.083: INFO: 
Dec 16 12:36:57.083: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:36:58.119: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:36:58.120: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:36:58.120: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:36:58.120: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:36:58.120: INFO: 
Dec 16 12:36:58.120: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:36:59.632: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:36:59.632: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:36:59.632: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:36:59.632: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:36:59.632: INFO: 
Dec 16 12:36:59.632: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:37:00.664: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:37:00.664: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:37:00.665: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:00.665: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:00.665: INFO: 
Dec 16 12:37:00.665: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:37:02.595: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:37:02.595: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:37:02.596: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:02.596: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:02.596: INFO: 
Dec 16 12:37:02.596: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:37:03.613: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:37:03.613: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:37:03.613: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:03.613: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:03.613: INFO: 
Dec 16 12:37:03.613: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:37:04.666: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:37:04.667: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:37:04.667: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:04.667: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:04.667: INFO: 
Dec 16 12:37:04.667: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:37:05.789: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:37:05.789: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:37:05.789: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:05.789: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:05.789: INFO: 
Dec 16 12:37:05.789: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 16 12:37:06.799: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 16 12:37:06.799: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:35:59 +0000 UTC  }]
Dec 16 12:37:06.800: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:06.800: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:36:20 +0000 UTC  }]
Dec 16 12:37:06.800: INFO: 
Dec 16 12:37:06.800: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-4g478
Dec 16 12:37:07.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:37:08.139: INFO: rc: 1
Dec 16 12:37:08.140: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001c623f0 exit status 1   true [0xc0022ae480 0xc0022ae498 0xc0022ae4b0] [0xc0022ae480 0xc0022ae498 0xc0022ae4b0] [0xc0022ae490 0xc0022ae4a8] [0x935700 0x935700] 0xc001da4000 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 16 12:37:18.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:37:18.340: INFO: rc: 1
Dec 16 12:37:18.341: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f34120 exit status 1   true [0xc0000e8288 0xc0003cc250 0xc0003cc390] [0xc0000e8288 0xc0003cc250 0xc0003cc390] [0xc0003cc138 0xc0003cc328] [0x935700 0x935700] 0xc0026e8240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 16 12:37:28.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:37:28.581: INFO: rc: 1
Dec 16 12:37:28.582: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001580510 exit status 1   true [0xc0021fa000 0xc0021fa018 0xc0021fa030] [0xc0021fa000 0xc0021fa018 0xc0021fa030] [0xc0021fa010 0xc0021fa028] [0x935700 0x935700] 0xc002718300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 16 12:42:14.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4g478 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 16 12:42:14.789: INFO: rc: 1
Dec 16 12:42:14.790: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Dec 16 12:42:14.790: INFO: Scaling statefulset ss to 0
Dec 16 12:42:14.808: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 16 12:42:14.810: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4g478
Dec 16 12:42:14.813: INFO: Scaling statefulset ss to 0
Dec 16 12:42:14.823: INFO: Waiting for statefulset status.replicas updated to 0
Dec 16 12:42:14.825: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:42:14.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4g478" for this suite.
Dec 16 12:42:23.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:42:23.151: INFO: namespace: e2e-tests-statefulset-4g478, resource: bindings, ignored listing per whitelist
Dec 16 12:42:23.185: INFO: namespace e2e-tests-statefulset-4g478 deletion completed in 8.218923115s

• [SLOW TEST:384.498 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:42:23.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 16 12:42:35.464: INFO: Pod pod-hostip-81ddccca-2001-11ea-9388-0242ac110004 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:42:35.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vtz5z" for this suite.
Dec 16 12:42:59.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:42:59.650: INFO: namespace: e2e-tests-pods-vtz5z, resource: bindings, ignored listing per whitelist
Dec 16 12:42:59.765: INFO: namespace e2e-tests-pods-vtz5z deletion completed in 24.292792649s

• [SLOW TEST:36.580 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:42:59.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 16 12:43:00.042: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 16 12:43:00.052: INFO: Waiting for terminating namespaces to be deleted...
Dec 16 12:43:00.056: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 16 12:43:00.123: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 16 12:43:00.123: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 12:43:00.123: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 12:43:00.123: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 16 12:43:00.123: INFO: 	Container weave ready: true, restart count 0
Dec 16 12:43:00.123: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 12:43:00.123: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 16 12:43:00.123: INFO: 	Container coredns ready: true, restart count 0
Dec 16 12:43:00.123: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 12:43:00.123: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 12:43:00.123: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 12:43:00.123: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 16 12:43:00.123: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 16 12:43:00.205: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97ccb1c4-2001-11ea-9388-0242ac110004.15e0da8fb28a8351], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-xbw2h/filler-pod-97ccb1c4-2001-11ea-9388-0242ac110004 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97ccb1c4-2001-11ea-9388-0242ac110004.15e0da90ed0c26b0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97ccb1c4-2001-11ea-9388-0242ac110004.15e0da91879591a6], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97ccb1c4-2001-11ea-9388-0242ac110004.15e0da91b114e6a7], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e0da920bd1f355], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:43:11.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-xbw2h" for this suite.
Dec 16 12:43:21.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:43:21.744: INFO: namespace: e2e-tests-sched-pred-xbw2h, resource: bindings, ignored listing per whitelist
Dec 16 12:43:21.940: INFO: namespace e2e-tests-sched-pred-xbw2h deletion completed in 10.396801333s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.175 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:43:21.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 16 12:43:22.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:24.617: INFO: stderr: ""
Dec 16 12:43:24.617: INFO: stdout: "pod/pause created\n"
Dec 16 12:43:24.618: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 16 12:43:24.618: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-25lws" to be "running and ready"
Dec 16 12:43:24.640: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 22.445507ms
Dec 16 12:43:26.973: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355262113s
Dec 16 12:43:28.999: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380585035s
Dec 16 12:43:31.167: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549374783s
Dec 16 12:43:33.237: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.618738196s
Dec 16 12:43:35.248: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.630151785s
Dec 16 12:43:37.320: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 12.701690184s
Dec 16 12:43:37.320: INFO: Pod "pause" satisfied condition "running and ready"
Dec 16 12:43:37.320: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 16 12:43:37.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:37.494: INFO: stderr: ""
Dec 16 12:43:37.494: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 16 12:43:37.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:37.625: INFO: stderr: ""
Dec 16 12:43:37.625: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 16 12:43:37.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:37.855: INFO: stderr: ""
Dec 16 12:43:37.855: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 16 12:43:37.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:38.007: INFO: stderr: ""
Dec 16 12:43:38.007: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 16 12:43:38.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:38.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:43:38.211: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 16 12:43:38.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-25lws'
Dec 16 12:43:38.401: INFO: stderr: "No resources found.\n"
Dec 16 12:43:38.401: INFO: stdout: ""
Dec 16 12:43:38.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-25lws -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 16 12:43:38.651: INFO: stderr: ""
Dec 16 12:43:38.651: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:43:38.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-25lws" for this suite.
Dec 16 12:43:44.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:43:44.877: INFO: namespace: e2e-tests-kubectl-25lws, resource: bindings, ignored listing per whitelist
Dec 16 12:43:44.970: INFO: namespace e2e-tests-kubectl-25lws deletion completed in 6.297496476s

• [SLOW TEST:23.029 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:43:44.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-kvrxf/configmap-test-b2ab2b86-2001-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 12:43:45.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-kvrxf" to be "success or failure"
Dec 16 12:43:45.396: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.166744ms
Dec 16 12:43:47.409: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035327599s
Dec 16 12:43:49.433: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058780354s
Dec 16 12:43:51.454: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079426283s
Dec 16 12:43:53.471: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096860227s
Dec 16 12:43:55.786: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.412161429s
Dec 16 12:43:57.808: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.433623098s
STEP: Saw pod success
Dec 16 12:43:57.808: INFO: Pod "pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:43:57.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004 container env-test: 
STEP: delete the pod
Dec 16 12:43:58.184: INFO: Waiting for pod pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004 to disappear
Dec 16 12:43:58.403: INFO: Pod pod-configmaps-b2b5ea7c-2001-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:43:58.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kvrxf" for this suite.
Dec 16 12:44:04.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:44:04.606: INFO: namespace: e2e-tests-configmap-kvrxf, resource: bindings, ignored listing per whitelist
Dec 16 12:44:04.692: INFO: namespace e2e-tests-configmap-kvrxf deletion completed in 6.270739175s

• [SLOW TEST:19.722 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:44:04.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:44:15.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tbwd6" for this suite.
Dec 16 12:45:05.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:45:05.267: INFO: namespace: e2e-tests-kubelet-test-tbwd6, resource: bindings, ignored listing per whitelist
Dec 16 12:45:05.355: INFO: namespace e2e-tests-kubelet-test-tbwd6 deletion completed in 50.250503485s

• [SLOW TEST:60.662 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:45:05.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 16 12:45:14.129: INFO: 10 pods remaining
Dec 16 12:45:14.130: INFO: 10 pods has nil DeletionTimestamp
Dec 16 12:45:14.130: INFO: 
Dec 16 12:45:16.420: INFO: 10 pods remaining
Dec 16 12:45:16.421: INFO: 10 pods has nil DeletionTimestamp
Dec 16 12:45:16.421: INFO: 
Dec 16 12:45:17.172: INFO: 10 pods remaining
Dec 16 12:45:17.172: INFO: 5 pods has nil DeletionTimestamp
Dec 16 12:45:17.172: INFO: 
Dec 16 12:45:19.172: INFO: 0 pods remaining
Dec 16 12:45:19.172: INFO: 0 pods has nil DeletionTimestamp
Dec 16 12:45:19.172: INFO: 
STEP: Gathering metrics
W1216 12:45:20.118899       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 12:45:20.119: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:45:20.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xhxn2" for this suite.
Dec 16 12:45:36.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:45:36.520: INFO: namespace: e2e-tests-gc-xhxn2, resource: bindings, ignored listing per whitelist
Dec 16 12:45:36.595: INFO: namespace e2e-tests-gc-xhxn2 deletion completed in 16.472497208s

• [SLOW TEST:31.240 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:45:36.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:45:36.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-hqmzq" to be "success or failure"
Dec 16 12:45:36.827: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.097085ms
Dec 16 12:45:38.865: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054044982s
Dec 16 12:45:40.909: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097502056s
Dec 16 12:45:43.184: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373093883s
Dec 16 12:45:46.024: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.212676805s
Dec 16 12:45:48.044: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.232850598s
STEP: Saw pod success
Dec 16 12:45:48.044: INFO: Pod "downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:45:48.060: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 12:45:48.189: INFO: Waiting for pod downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004 to disappear
Dec 16 12:45:48.198: INFO: Pod downwardapi-volume-f5225043-2001-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:45:48.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hqmzq" for this suite.
Dec 16 12:45:57.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:45:57.597: INFO: namespace: e2e-tests-projected-hqmzq, resource: bindings, ignored listing per whitelist
Dec 16 12:45:57.656: INFO: namespace e2e-tests-projected-hqmzq deletion completed in 8.409967108s

• [SLOW TEST:21.059 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:45:57.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-01b49ab4-2002-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 12:45:57.970: INFO: Waiting up to 5m0s for pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-ddpj5" to be "success or failure"
Dec 16 12:45:58.016: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 46.220706ms
Dec 16 12:46:00.495: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524647465s
Dec 16 12:46:02.555: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58448546s
Dec 16 12:46:04.819: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.84907639s
Dec 16 12:46:06.861: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890668066s
Dec 16 12:46:08.893: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.923082123s
STEP: Saw pod success
Dec 16 12:46:08.894: INFO: Pod "pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:46:08.911: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 16 12:46:09.137: INFO: Waiting for pod pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004 to disappear
Dec 16 12:46:09.187: INFO: Pod pod-configmaps-01bdf01e-2002-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:46:09.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ddpj5" for this suite.
Dec 16 12:46:15.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:46:15.352: INFO: namespace: e2e-tests-configmap-ddpj5, resource: bindings, ignored listing per whitelist
Dec 16 12:46:15.388: INFO: namespace e2e-tests-configmap-ddpj5 deletion completed in 6.192536419s

• [SLOW TEST:17.732 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:46:15.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 12:46:16.071: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 16 12:46:21.088: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 16 12:46:27.113: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 16 12:46:27.219: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-l84bb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l84bb/deployments/test-cleanup-deployment,UID:1325d194-2002-11ea-a994-fa163e34d433,ResourceVersion:15014786,Generation:1,CreationTimestamp:2019-12-16 12:46:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 16 12:46:27.270: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Dec 16 12:46:27.270: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 16 12:46:27.271: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-l84bb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l84bb/replicasets/test-cleanup-controller,UID:0c4f47f0-2002-11ea-a994-fa163e34d433,ResourceVersion:15014787,Generation:1,CreationTimestamp:2019-12-16 12:46:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1325d194-2002-11ea-a994-fa163e34d433 0xc00269112f 0xc002691140}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 16 12:46:27.297: INFO: Pod "test-cleanup-controller-jtfg2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-jtfg2,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-l84bb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l84bb/pods/test-cleanup-controller-jtfg2,UID:0c8d6e1b-2002-11ea-a994-fa163e34d433,ResourceVersion:15014782,Generation:0,CreationTimestamp:2019-12-16 12:46:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 0c4f47f0-2002-11ea-a994-fa163e34d433 0xc002944e07 0xc002944e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7qb4l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7qb4l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7qb4l true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002944e70} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002944e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:46:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:46:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:46:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 12:46:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-16 12:46:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-16 12:46:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://efc4f67cf1c11230c06869d7a672684d542d0792af83b6d09be2883962f378f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:46:27.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-l84bb" for this suite.
Dec 16 12:46:35.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:46:35.602: INFO: namespace: e2e-tests-deployment-l84bb, resource: bindings, ignored listing per whitelist
Dec 16 12:46:35.665: INFO: namespace e2e-tests-deployment-l84bb deletion completed in 8.337805003s

• [SLOW TEST:20.277 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:46:35.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:46:48.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ks49c" for this suite.
Dec 16 12:47:34.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:47:34.393: INFO: namespace: e2e-tests-kubelet-test-ks49c, resource: bindings, ignored listing per whitelist
Dec 16 12:47:34.470: INFO: namespace e2e-tests-kubelet-test-ks49c deletion completed in 46.216158478s

• [SLOW TEST:58.805 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:47:34.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-3b7d2be3-2002-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 12:47:34.991: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-jqvts" to be "success or failure"
Dec 16 12:47:35.015: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.532786ms
Dec 16 12:47:37.163: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171421084s
Dec 16 12:47:39.188: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196364331s
Dec 16 12:47:41.240: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248101922s
Dec 16 12:47:43.265: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272983534s
Dec 16 12:47:45.301: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.309464582s
STEP: Saw pod success
Dec 16 12:47:45.302: INFO: Pod "pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:47:45.316: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 16 12:47:45.468: INFO: Waiting for pod pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004 to disappear
Dec 16 12:47:45.476: INFO: Pod pod-projected-configmaps-3b8ec8b4-2002-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:47:45.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jqvts" for this suite.
Dec 16 12:47:53.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:47:53.840: INFO: namespace: e2e-tests-projected-jqvts, resource: bindings, ignored listing per whitelist
Dec 16 12:47:53.919: INFO: namespace e2e-tests-projected-jqvts deletion completed in 8.426491796s

• [SLOW TEST:19.448 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
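Editor's note: the repeated `Waiting up to 5m0s for pod … to be "success or failure"` lines in the section above are the framework's poll loop over the pod phase. A minimal sketch of that pattern in Python (the `get_phase` stub, interval, and timing are illustrative assumptions, not the e2e framework's actual Go code):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=0.01):
    """Poll get_phase() until a terminal phase is seen or the timeout expires.

    Mirrors the log's "Phase=Pending ... Elapsed: Ns" lines: each poll prints
    the current phase and elapsed time. (Hypothetical sketch, not framework code.)
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f"Phase={phase!r}, elapsed={time.monotonic() - start:.3f}s")
        if phase in want:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod did not reach {want} within {timeout}s")

# Simulated phase sequence mirroring the log: a few Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases))
```

In the real framework the 5m0s budget is generous; as the log shows, these pods typically reach `Succeeded` in about 10 seconds.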
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:47:53.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sbk2b
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 12:47:54.171: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 12:48:34.425: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-sbk2b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:48:34.426: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:48:35.243: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:48:35.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sbk2b" for this suite.
Dec 16 12:49:01.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:49:01.490: INFO: namespace: e2e-tests-pod-network-test-sbk2b, resource: bindings, ignored listing per whitelist
Dec 16 12:49:01.676: INFO: namespace e2e-tests-pod-network-test-sbk2b deletion completed in 26.415733566s

• [SLOW TEST:67.756 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
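Editor's note: the intra-pod connectivity check above issues one request through the test container's `/dial` helper; the exact URL appears in the `ExecWithOptions` line. A sketch of how that dial URL is assembled (IP addresses are the ones from this run; the Python helper itself is illustrative):

```python
from urllib.parse import urlencode

def dial_url(exec_pod_ip, target_ip, protocol="http", port=8080, tries=1):
    """Build the /dial URL curled from the host-exec pod, as seen in the log.

    The query parameter names and order are taken verbatim from the
    ExecWithOptions line above; the function name is hypothetical.
    """
    query = urlencode({
        "request": "hostName",   # ask the target to report its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{exec_pod_ip}:8080/dial?{query}"

url = dial_url("10.32.0.5", "10.32.0.4")
# → http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1
```

The subsequent `Waiting for endpoints: map[]` line indicates the expected-endpoints set drained to empty, i.e. every target answered.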
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:49:01.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 16 12:49:02.078: INFO: Waiting up to 5m0s for pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-944q9" to be "success or failure"
Dec 16 12:49:02.090: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.861497ms
Dec 16 12:49:04.183: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103944031s
Dec 16 12:49:06.227: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14795697s
Dec 16 12:49:08.846: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.766900195s
Dec 16 12:49:10.881: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.802430248s
Dec 16 12:49:12.906: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.827415416s
STEP: Saw pod success
Dec 16 12:49:12.906: INFO: Pod "downward-api-6f6fd336-2002-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:49:12.916: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6f6fd336-2002-11ea-9388-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 16 12:49:13.119: INFO: Waiting for pod downward-api-6f6fd336-2002-11ea-9388-0242ac110004 to disappear
Dec 16 12:49:13.183: INFO: Pod downward-api-6f6fd336-2002-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:49:13.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-944q9" for this suite.
Dec 16 12:49:19.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:49:19.483: INFO: namespace: e2e-tests-downward-api-944q9, resource: bindings, ignored listing per whitelist
Dec 16 12:49:19.564: INFO: namespace e2e-tests-downward-api-944q9 deletion completed in 6.307661856s

• [SLOW TEST:17.887 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:49:19.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:49:20.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-jxknt" to be "success or failure"
Dec 16 12:49:20.054: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.585041ms
Dec 16 12:49:22.284: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239176714s
Dec 16 12:49:24.308: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263508764s
Dec 16 12:49:27.088: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.043362713s
Dec 16 12:49:29.109: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.064543669s
Dec 16 12:49:31.176: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.131158491s
STEP: Saw pod success
Dec 16 12:49:31.176: INFO: Pod "downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:49:31.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 12:49:31.570: INFO: Waiting for pod downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004 to disappear
Dec 16 12:49:31.607: INFO: Pod downwardapi-volume-7a149a43-2002-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:49:31.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jxknt" for this suite.
Dec 16 12:49:37.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:49:37.896: INFO: namespace: e2e-tests-downward-api-jxknt, resource: bindings, ignored listing per whitelist
Dec 16 12:49:37.975: INFO: namespace e2e-tests-downward-api-jxknt deletion completed in 6.282928484s

• [SLOW TEST:18.412 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:49:37.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:50:38.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-trp4g" for this suite.
Dec 16 12:51:02.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:51:02.676: INFO: namespace: e2e-tests-container-probe-trp4g, resource: bindings, ignored listing per whitelist
Dec 16 12:51:02.698: INFO: namespace e2e-tests-container-probe-trp4g deletion completed in 24.46031638s

• [SLOW TEST:84.721 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:51:02.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 16 12:51:02.959: INFO: Waiting up to 5m0s for pod "pod-b7819ca1-2002-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-ptlfj" to be "success or failure"
Dec 16 12:51:02.996: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.042966ms
Dec 16 12:51:05.584: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.624751715s
Dec 16 12:51:07.607: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.647688364s
Dec 16 12:51:10.017: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.057956421s
Dec 16 12:51:12.037: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.077846457s
Dec 16 12:51:14.063: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.103842775s
Dec 16 12:51:16.115: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.156582058s
STEP: Saw pod success
Dec 16 12:51:16.116: INFO: Pod "pod-b7819ca1-2002-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:51:16.121: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b7819ca1-2002-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 12:51:16.574: INFO: Waiting for pod pod-b7819ca1-2002-11ea-9388-0242ac110004 to disappear
Dec 16 12:51:16.775: INFO: Pod pod-b7819ca1-2002-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:51:16.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ptlfj" for this suite.
Dec 16 12:51:24.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:51:24.930: INFO: namespace: e2e-tests-emptydir-ptlfj, resource: bindings, ignored listing per whitelist
Dec 16 12:51:25.021: INFO: namespace e2e-tests-emptydir-ptlfj deletion completed in 8.237325963s

• [SLOW TEST:22.323 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
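Editor's note: the `(root,0644,tmpfs)` case above writes a file into a tmpfs-backed emptyDir as root and verifies its permission bits. The `ls -l`-style rendering the check expects for mode 0644 can be computed with the standard library (illustrative only; the framework's actual check is in Go):

```python
import stat

def mode_string(mode: int) -> str:
    """Render a regular file's permission bits the way `ls -l` would."""
    return stat.filemode(stat.S_IFREG | mode)

rendered = mode_string(0o644)
# → -rw-r--r--
```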
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:51:25.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 16 12:51:25.268: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 16 12:51:25.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:51:25.895: INFO: stderr: ""
Dec 16 12:51:25.895: INFO: stdout: "service/redis-slave created\n"
Dec 16 12:51:25.896: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 16 12:51:25.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:51:26.529: INFO: stderr: ""
Dec 16 12:51:26.529: INFO: stdout: "service/redis-master created\n"
Dec 16 12:51:26.543: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 16 12:51:26.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:51:27.298: INFO: stderr: ""
Dec 16 12:51:27.299: INFO: stdout: "service/frontend created\n"
Dec 16 12:51:27.299: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 16 12:51:27.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:51:27.743: INFO: stderr: ""
Dec 16 12:51:27.743: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 16 12:51:27.744: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 16 12:51:27.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:51:28.348: INFO: stderr: ""
Dec 16 12:51:28.348: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 16 12:51:28.351: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 16 12:51:28.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:51:28.913: INFO: stderr: ""
Dec 16 12:51:28.914: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 16 12:51:28.914: INFO: Waiting for all frontend pods to be Running.
Dec 16 12:52:03.972: INFO: Waiting for frontend to serve content.
Dec 16 12:52:05.594: INFO: Trying to add a new entry to the guestbook.
Dec 16 12:52:05.673: INFO: Verifying that added entry can be retrieved.
Dec 16 12:52:05.742: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 16 12:52:10.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:52:11.149: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:52:11.149: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 12:52:11.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:52:11.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:52:11.367: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 12:52:11.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:52:11.533: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:52:11.533: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 12:52:11.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:52:11.802: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:52:11.802: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 12:52:11.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:52:12.199: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:52:12.200: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 16 12:52:12.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qvp9v'
Dec 16 12:52:12.586: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 12:52:12.599: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:52:12.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qvp9v" for this suite.
Dec 16 12:53:04.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:53:04.864: INFO: namespace: e2e-tests-kubectl-qvp9v, resource: bindings, ignored listing per whitelist
Dec 16 12:53:04.930: INFO: namespace e2e-tests-kubectl-qvp9v deletion completed in 52.26513219s

• [SLOW TEST:99.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
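Editor's note: every `Running '...'` line in the guestbook section above is the same kubectl invocation shape — manifests piped on stdin (`-f -`), with `--grace-period=0 --force` added for the teardown passes. A sketch of how those command lines are assembled (paths and namespace are the ones recorded in this log; the helper function is hypothetical):

```python
def kubectl_stdin_cmd(verb, namespace, kubeconfig="/root/.kube/config",
                      kubectl="/usr/local/bin/kubectl", extra=()):
    """Assemble the kubectl argv the e2e framework logs as "Running '...'".

    The manifest is supplied on stdin, hence "-f -". (Illustrative sketch,
    not the framework's actual Go code.)
    """
    return [kubectl, f"--kubeconfig={kubeconfig}", verb, *extra,
            "-f", "-", f"--namespace={namespace}"]

create = kubectl_stdin_cmd("create", "e2e-tests-kubectl-qvp9v")
delete = kubectl_stdin_cmd("delete", "e2e-tests-kubectl-qvp9v",
                           extra=("--grace-period=0", "--force"))
```

Note the forced delete is also what produces the repeated "Immediate deletion does not wait for confirmation" warnings on stderr in the cleanup steps.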
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:53:04.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 16 12:53:17.338: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0054710c-2003-11ea-9388-0242ac110004", GenerateName:"", Namespace:"e2e-tests-pods-ckfvt", SelfLink:"/api/v1/namespaces/e2e-tests-pods-ckfvt/pods/pod-submit-remove-0054710c-2003-11ea-9388-0242ac110004", UID:"005fdf6e-2003-11ea-a994-fa163e34d433", ResourceVersion:"15015679", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712097585, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"74692781"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-g6klt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ac6240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g6klt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001723b68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a4bd40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001723ba0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001723d30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001723d38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001723d3c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712097585, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712097596, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712097596, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712097585, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001984780), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001984800), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://85c9474b22c7c30553699af3db021f85d364a70f5990129c329e7d9d1ada4d17"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:53:25.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ckfvt" for this suite.
Dec 16 12:53:31.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:53:31.202: INFO: namespace: e2e-tests-pods-ckfvt, resource: bindings, ignored listing per whitelist
Dec 16 12:53:31.222: INFO: namespace e2e-tests-pods-ckfvt deletion completed in 6.17645677s

• [SLOW TEST:26.292 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
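The `v1.Pod` dump above is verbose, but the object under test is small. A hedged reconstruction of the submitted pod, read off the dump (the exact name suffix and the `time` label value vary per run):

```shell
# Reconstructed from the v1.Pod dump: an nginx:1.14-alpine pod labeled
# name=foo. The test submits it, confirms an ADDED event on a watch,
# deletes it gracefully, then confirms a DELETED event.
cat > pod-submit-remove.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove
  labels:
    name: foo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
```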
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:53:31.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 16 12:53:55.590: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:53:55.599: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:53:57.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:53:57.619: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:53:59.599: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:53:59.611: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:01.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:01.618: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:03.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:03.629: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:05.599: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:05.617: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:07.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:07.628: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:09.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:09.618: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:11.599: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:11.619: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:13.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:13.653: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:15.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:15.614: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:17.599: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:17.609: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 16 12:54:19.600: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 16 12:54:19.643: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:54:19.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-slsgf" for this suite.
Dec 16 12:54:44.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:54:45.606: INFO: namespace: e2e-tests-container-lifecycle-hook-slsgf, resource: bindings, ignored listing per whitelist
Dec 16 12:54:45.615: INFO: namespace e2e-tests-container-lifecycle-hook-slsgf deletion completed in 25.928893387s

• [SLOW TEST:74.392 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
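The ~24 seconds of "still exists" polling above is expected behavior: a preStop exec hook runs to completion (or hits its deadline) before the container receives SIGTERM, so deletion takes noticeably longer than for a plain pod. A hypothetical reconstruction of such a pod; the image and hook command here are illustrative, not the test's actual values:

```shell
# A pod whose container runs a preStop exec hook on deletion. The
# kubelet executes the hook inside the container before stopping it,
# which delays the pod's disappearance as seen in the log above.
cat > pod-with-prestop-exec-hook.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]
EOF
```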
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:54:45.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 12:54:46.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-cc69n'
Dec 16 12:54:49.157: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 16 12:54:49.158: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 16 12:54:51.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-cc69n'
Dec 16 12:54:54.590: INFO: stderr: ""
Dec 16 12:54:54.591: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:54:54.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cc69n" for this suite.
Dec 16 12:55:23.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:55:23.716: INFO: namespace: e2e-tests-kubectl-cc69n, resource: bindings, ignored listing per whitelist
Dec 16 12:55:23.758: INFO: namespace e2e-tests-kubectl-cc69n deletion completed in 28.669420389s

• [SLOW TEST:38.142 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
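The stderr on this run flags `kubectl run --generator=deployment/apps.v1` as deprecated. For comparison, the deprecated form the test exercises and the two replacements the warning points to, collected into a script rather than executed (they need a live cluster):

```shell
cat > run-migration.sh <<'EOF'
#!/bin/sh
# Deprecated form exercised by the test: implicitly creates a Deployment.
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# Replacement when a bare pod is wanted:
kubectl run e2e-test-nginx-deployment --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
# Replacement when a Deployment is wanted:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
EOF
```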
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:55:23.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 12:55:24.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-gdhbc" to be "success or failure"
Dec 16 12:55:24.623: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 219.717228ms
Dec 16 12:55:28.011: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.607538672s
Dec 16 12:55:30.025: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621381782s
Dec 16 12:55:33.344: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.940230234s
Dec 16 12:55:35.367: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.963731905s
Dec 16 12:55:37.390: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 12.986769215s
Dec 16 12:55:40.030: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.627112635s
STEP: Saw pod success
Dec 16 12:55:40.031: INFO: Pod "downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:55:40.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 12:55:40.612: INFO: Waiting for pod downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004 to disappear
Dec 16 12:55:40.636: INFO: Pod downwardapi-volume-5358dd3f-2003-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:55:40.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gdhbc" for this suite.
Dec 16 12:55:46.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:55:46.962: INFO: namespace: e2e-tests-projected-gdhbc, resource: bindings, ignored listing per whitelist
Dec 16 12:55:47.075: INFO: namespace e2e-tests-projected-gdhbc deletion completed in 6.235214915s

• [SLOW TEST:23.316 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
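The downward API spec above creates a pod that reads its own memory limit from a projected volume and then exits, which is why the phase goes Pending, then Running, then Succeeded. A sketch of the kind of pod this spec creates; the mount path, file name, and limit value are assumptions, not the test's actual values:

```shell
# A pod whose memory limit is exposed as a file via a projected
# downwardAPI volume source. The container cats the file and exits 0,
# producing the "success or failure" -> Succeeded sequence in the log.
cat > downwardapi-volume.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
```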
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:55:47.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 16 12:56:17.576: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:17.576: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:18.103: INFO: Exec stderr: ""
Dec 16 12:56:18.103: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:18.103: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:18.617: INFO: Exec stderr: ""
Dec 16 12:56:18.617: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:18.618: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:19.194: INFO: Exec stderr: ""
Dec 16 12:56:19.194: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:19.194: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:19.643: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 16 12:56:19.644: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:19.644: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:19.999: INFO: Exec stderr: ""
Dec 16 12:56:19.999: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:19.999: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:20.410: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 16 12:56:20.411: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:20.411: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:20.854: INFO: Exec stderr: ""
Dec 16 12:56:20.855: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:20.855: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:21.280: INFO: Exec stderr: ""
Dec 16 12:56:21.280: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:21.280: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:21.659: INFO: Exec stderr: ""
Dec 16 12:56:21.659: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jrdt9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 12:56:21.659: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 12:56:21.984: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:56:21.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-jrdt9" for this suite.
Dec 16 12:57:30.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:57:30.380: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-jrdt9, resource: bindings, ignored listing per whitelist
Dec 16 12:57:30.389: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-jrdt9 deletion completed in 1m8.377743976s

• [SLOW TEST:103.313 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
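The exec loop above compares `/etc/hosts` against a baked-in `/etc/hosts-original` in each container. The rule it verifies: for `hostNetwork=false` pods the kubelet writes `/etc/hosts` itself, while a `hostNetwork=true` pod (or a container that mounts its own `/etc/hosts`, like `busybox-3` here) is left unmanaged. A hypothetical sketch of the two pods being compared, with images and commands as assumptions:

```shell
# Minimal stand-ins for the spec's test-pod and test-host-network-pod.
# Only the hostNetwork field differs; that flag alone decides whether
# the kubelet manages the container's /etc/hosts.
cat > etc-hosts-pods.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
```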
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:57:30.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 16 12:57:30.638: INFO: Waiting up to 5m0s for pod "pod-9e9cad32-2003-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-j4djs" to be "success or failure"
Dec 16 12:57:30.808: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 169.401588ms
Dec 16 12:57:32.829: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190479085s
Dec 16 12:57:35.869: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.230950214s
Dec 16 12:57:37.891: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.252101795s
Dec 16 12:57:41.146: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.507380267s
Dec 16 12:57:43.270: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.631109904s
Dec 16 12:57:45.291: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.652663778s
Dec 16 12:57:47.591: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.952680837s
Dec 16 12:57:49.611: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.972053554s
STEP: Saw pod success
Dec 16 12:57:49.611: INFO: Pod "pod-9e9cad32-2003-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 12:57:49.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9e9cad32-2003-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 12:57:50.549: INFO: Waiting for pod pod-9e9cad32-2003-11ea-9388-0242ac110004 to disappear
Dec 16 12:57:50.924: INFO: Pod pod-9e9cad32-2003-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:57:50.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j4djs" for this suite.
Dec 16 12:57:57.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:57:57.227: INFO: namespace: e2e-tests-emptydir-j4djs, resource: bindings, ignored listing per whitelist
Dec 16 12:57:57.318: INFO: namespace e2e-tests-emptydir-j4djs deletion completed in 6.366361525s

• [SLOW TEST:26.929 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
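The "(non-root,0666,default)" case name above encodes the test matrix: run as a non-root user, expect file mode 0666, on the default (disk-backed) emptyDir medium. A hedged reconstruction of such a pod; the user ID, image, and shell command are illustrative, since the real test uses a dedicated mount-test image:

```shell
# A non-root pod that writes into an emptyDir on the default medium,
# sets mode 0666, and prints the mode so the test can assert on it.
cat > emptydir-0666.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
```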
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:57:57.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 16 12:57:57.637: INFO: namespace e2e-tests-kubectl-znch4
Dec 16 12:57:57.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znch4'
Dec 16 12:57:58.175: INFO: stderr: ""
Dec 16 12:57:58.175: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 16 12:58:00.333: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:00.333: INFO: Found 0 / 1
Dec 16 12:58:01.215: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:01.215: INFO: Found 0 / 1
Dec 16 12:58:02.240: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:02.240: INFO: Found 0 / 1
Dec 16 12:58:03.191: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:03.191: INFO: Found 0 / 1
Dec 16 12:58:04.193: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:04.193: INFO: Found 0 / 1
Dec 16 12:58:05.659: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:05.659: INFO: Found 0 / 1
Dec 16 12:58:06.417: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:06.418: INFO: Found 0 / 1
Dec 16 12:58:07.194: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:07.194: INFO: Found 0 / 1
Dec 16 12:58:08.209: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:08.210: INFO: Found 0 / 1
Dec 16 12:58:09.197: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:09.198: INFO: Found 0 / 1
Dec 16 12:58:10.185: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:10.185: INFO: Found 1 / 1
Dec 16 12:58:10.185: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 16 12:58:10.271: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 12:58:10.271: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 16 12:58:10.271: INFO: wait on redis-master startup in e2e-tests-kubectl-znch4 
Dec 16 12:58:10.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-m8wlj redis-master --namespace=e2e-tests-kubectl-znch4'
Dec 16 12:58:10.663: INFO: stderr: ""
Dec 16 12:58:10.663: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 16 Dec 12:58:08.787 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Dec 12:58:08.787 # Server started, Redis version 3.2.12\n1:M 16 Dec 12:58:08.787 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Dec 12:58:08.787 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 16 12:58:10.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-znch4'
Dec 16 12:58:10.855: INFO: stderr: ""
Dec 16 12:58:10.855: INFO: stdout: "service/rm2 exposed\n"
Dec 16 12:58:10.913: INFO: Service rm2 in namespace e2e-tests-kubectl-znch4 found.
STEP: exposing service
Dec 16 12:58:12.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-znch4'
Dec 16 12:58:13.513: INFO: stderr: ""
Dec 16 12:58:13.513: INFO: stdout: "service/rm3 exposed\n"
Dec 16 12:58:13.693: INFO: Service rm3 in namespace e2e-tests-kubectl-znch4 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:58:15.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-znch4" for this suite.
Dec 16 12:58:39.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:58:40.135: INFO: namespace: e2e-tests-kubectl-znch4, resource: bindings, ignored listing per whitelist
Dec 16 12:58:40.155: INFO: namespace e2e-tests-kubectl-znch4 deletion completed in 24.400227592s

• [SLOW TEST:42.836 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
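The expose sequence exercised in the test above can be reproduced by hand. This is a hedged sketch, not the test's own code: it assumes a reachable cluster and an existing `rc/redis-master`; the service names, ports, and namespace are taken from the log.

```shell
# Sketch: reproduce the "Kubectl expose" steps from the log above.
# Assumes a reachable cluster and an existing rc/redis-master in $NS.
NS=e2e-tests-kubectl-znch4   # namespace name taken from the log

# Expose the RC as a service on port 1234, targeting redis on 6379
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace="$NS"

# Expose the resulting service again under a new name and port
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace="$NS"

# Verify both services exist, as the test does before tearing down
kubectl get services rm2 rm3 --namespace="$NS"
```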
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:58:40.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-679j
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 12:58:40.411: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-679j" in namespace "e2e-tests-subpath-92hz2" to be "success or failure"
Dec 16 12:58:40.613: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 201.766891ms
Dec 16 12:58:43.249: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83775442s
Dec 16 12:58:45.274: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.86324068s
Dec 16 12:58:47.993: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 7.581854382s
Dec 16 12:58:50.224: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 9.812920683s
Dec 16 12:58:52.245: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 11.833863613s
Dec 16 12:58:54.266: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 13.854883592s
Dec 16 12:58:56.286: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 15.875448537s
Dec 16 12:58:58.306: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 17.894832946s
Dec 16 12:59:00.760: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 20.348881523s
Dec 16 12:59:02.801: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 22.390446739s
Dec 16 12:59:04.828: INFO: Pod "pod-subpath-test-secret-679j": Phase="Pending", Reason="", readiness=false. Elapsed: 24.417008795s
Dec 16 12:59:06.852: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 26.440650365s
Dec 16 12:59:08.889: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 28.478427093s
Dec 16 12:59:10.925: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 30.514454038s
Dec 16 12:59:12.954: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 32.543344503s
Dec 16 12:59:14.976: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 34.565209567s
Dec 16 12:59:17.028: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 36.616952699s
Dec 16 12:59:19.038: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 38.626737193s
Dec 16 12:59:21.592: INFO: Pod "pod-subpath-test-secret-679j": Phase="Running", Reason="", readiness=false. Elapsed: 41.181501493s
Dec 16 12:59:23.624: INFO: Pod "pod-subpath-test-secret-679j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.213189004s
STEP: Saw pod success
Dec 16 12:59:23.625: INFO: Pod "pod-subpath-test-secret-679j" satisfied condition "success or failure"
Dec 16 12:59:23.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-679j container test-container-subpath-secret-679j: 
STEP: delete the pod
Dec 16 12:59:25.591: INFO: Waiting for pod pod-subpath-test-secret-679j to disappear
Dec 16 12:59:25.624: INFO: Pod pod-subpath-test-secret-679j no longer exists
STEP: Deleting pod pod-subpath-test-secret-679j
Dec 16 12:59:25.624: INFO: Deleting pod "pod-subpath-test-secret-679j" in namespace "e2e-tests-subpath-92hz2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 12:59:25.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-92hz2" for this suite.
Dec 16 12:59:31.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 12:59:32.152: INFO: namespace: e2e-tests-subpath-92hz2, resource: bindings, ignored listing per whitelist
Dec 16 12:59:32.197: INFO: namespace e2e-tests-subpath-92hz2 deletion completed in 6.550830803s

• [SLOW TEST:52.041 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
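The atomic-writer subpath scenario above corresponds, roughly, to mounting a single key of a secret via `subPath`. A hedged sketch follows; it is not the test's own pod spec, and all names (secret, pod, mount path) are illustrative.

```shell
# Sketch: mount one key of a secret at a subPath, as the test above
# exercises. Assumes a reachable cluster; all names are illustrative.
kubectl create secret generic demo-secret --from-literal=config.txt=hello

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/demo/config.txt"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/demo/config.txt
      subPath: config.txt        # mount just this key of the volume
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
EOF
```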
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 12:59:32.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 16 12:59:33.166: INFO: Number of nodes with available pods: 0
Dec 16 12:59:33.166: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:34.275: INFO: Number of nodes with available pods: 0
Dec 16 12:59:34.275: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:35.739: INFO: Number of nodes with available pods: 0
Dec 16 12:59:35.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:36.621: INFO: Number of nodes with available pods: 0
Dec 16 12:59:36.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:37.199: INFO: Number of nodes with available pods: 0
Dec 16 12:59:37.200: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:38.194: INFO: Number of nodes with available pods: 0
Dec 16 12:59:38.194: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:39.218: INFO: Number of nodes with available pods: 0
Dec 16 12:59:39.219: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:40.249: INFO: Number of nodes with available pods: 0
Dec 16 12:59:40.249: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:41.185: INFO: Number of nodes with available pods: 0
Dec 16 12:59:41.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:44.226: INFO: Number of nodes with available pods: 0
Dec 16 12:59:44.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:45.189: INFO: Number of nodes with available pods: 0
Dec 16 12:59:45.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:46.277: INFO: Number of nodes with available pods: 0
Dec 16 12:59:46.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:47.200: INFO: Number of nodes with available pods: 0
Dec 16 12:59:47.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:48.197: INFO: Number of nodes with available pods: 1
Dec 16 12:59:48.197: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 16 12:59:48.650: INFO: Number of nodes with available pods: 0
Dec 16 12:59:48.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:50.877: INFO: Number of nodes with available pods: 0
Dec 16 12:59:50.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:52.528: INFO: Number of nodes with available pods: 0
Dec 16 12:59:52.529: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:53.047: INFO: Number of nodes with available pods: 0
Dec 16 12:59:53.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:55.725: INFO: Number of nodes with available pods: 0
Dec 16 12:59:55.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:56.899: INFO: Number of nodes with available pods: 0
Dec 16 12:59:56.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:57.679: INFO: Number of nodes with available pods: 0
Dec 16 12:59:57.679: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 12:59:58.688: INFO: Number of nodes with available pods: 0
Dec 16 12:59:58.688: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 13:00:01.181: INFO: Number of nodes with available pods: 0
Dec 16 13:00:01.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 13:00:02.539: INFO: Number of nodes with available pods: 0
Dec 16 13:00:02.540: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 13:00:02.678: INFO: Number of nodes with available pods: 0
Dec 16 13:00:02.678: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 13:00:03.723: INFO: Number of nodes with available pods: 0
Dec 16 13:00:03.723: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 13:00:04.694: INFO: Number of nodes with available pods: 0
Dec 16 13:00:04.694: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 16 13:00:05.703: INFO: Number of nodes with available pods: 1
Dec 16 13:00:05.703: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rkmrg, will wait for the garbage collector to delete the pods
Dec 16 13:00:05.796: INFO: Deleting DaemonSet.extensions daemon-set took: 17.300932ms
Dec 16 13:00:05.897: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.681291ms
Dec 16 13:00:22.664: INFO: Number of nodes with available pods: 0
Dec 16 13:00:22.664: INFO: Number of running nodes: 0, number of available pods: 0
Dec 16 13:00:22.676: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rkmrg/daemonsets","resourceVersion":"15016477"},"items":null}

Dec 16 13:00:22.681: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rkmrg/pods","resourceVersion":"15016477"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:00:22.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rkmrg" for this suite.
Dec 16 13:00:30.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:00:30.905: INFO: namespace: e2e-tests-daemonsets-rkmrg, resource: bindings, ignored listing per whitelist
Dec 16 13:00:30.999: INFO: namespace e2e-tests-daemonsets-rkmrg deletion completed in 8.289588457s

• [SLOW TEST:58.800 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
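The revive step in the run above (a daemon pod forced to Failed, then recreated by the controller) can be observed manually with something like the following sketch. It assumes a live cluster; the label selector is illustrative and should match your DaemonSet's actual pod labels.

```shell
# Sketch: watch a DaemonSet controller replace a deleted daemon pod.
# Assumes a reachable cluster and a DaemonSet named "daemon-set" in $NS.
NS=e2e-tests-daemonsets-rkmrg   # namespace name taken from the log

# Pick one pod managed by the DaemonSet (label selector is illustrative)
POD=$(kubectl get pods -n "$NS" -l name=daemon-set -o name | head -n1)

# Delete it; the DaemonSet controller should schedule a replacement
kubectl delete -n "$NS" "$POD"

# Wait until the desired and available counts match again on every node
kubectl rollout status daemonset/daemon-set -n "$NS"
```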
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:00:30.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0a5b74f2-2004-11ea-9388-0242ac110004
STEP: Creating secret with name s-test-opt-upd-0a5b79db-2004-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0a5b74f2-2004-11ea-9388-0242ac110004
STEP: Updating secret s-test-opt-upd-0a5b79db-2004-11ea-9388-0242ac110004
STEP: Creating secret with name s-test-opt-create-0a5b7a7a-2004-11ea-9388-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:01:59.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f4mpd" for this suite.
Dec 16 13:02:23.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:02:23.575: INFO: namespace: e2e-tests-projected-f4mpd, resource: bindings, ignored listing per whitelist
Dec 16 13:02:23.771: INFO: namespace e2e-tests-projected-f4mpd deletion completed in 24.470282849s

• [SLOW TEST:112.772 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:02:23.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 16 13:02:24.253: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 16 13:02:24.264: INFO: Waiting for terminating namespaces to be deleted...
Dec 16 13:02:24.268: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 16 13:02:24.283: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 16 13:02:24.283: INFO: 	Container coredns ready: true, restart count 0
Dec 16 13:02:24.283: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 16 13:02:24.283: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 16 13:02:24.283: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 13:02:24.283: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 16 13:02:24.283: INFO: 	Container weave ready: true, restart count 0
Dec 16 13:02:24.283: INFO: 	Container weave-npc ready: true, restart count 0
Dec 16 13:02:24.283: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 16 13:02:24.283: INFO: 	Container coredns ready: true, restart count 0
Dec 16 13:02:24.283: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 13:02:24.283: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 16 13:02:24.283: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e0db9eb8e48779], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:02:25.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-q2hlv" for this suite.
Dec 16 13:02:33.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:02:33.742: INFO: namespace: e2e-tests-sched-pred-q2hlv, resource: bindings, ignored listing per whitelist
Dec 16 13:02:33.962: INFO: namespace e2e-tests-sched-pred-q2hlv deletion completed in 8.453389339s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:10.192 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
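The FailedScheduling event captured above can be reproduced by submitting a pod whose nodeSelector matches no node. A minimal sketch, assuming a reachable cluster; the pod name, the deliberately nonexistent label, and the image are illustrative:

```shell
# Sketch: provoke the "0/1 nodes are available: 1 node(s) didn't match
# node selector" warning seen above. All names here are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    no-such-label: "42"          # matches no node on purpose
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# The scheduler should record a FailedScheduling warning in the events
kubectl describe pod restricted-pod | grep -A2 Events
```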
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:02:33.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-53ae0b16-2004-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 13:02:34.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004" in namespace "e2e-tests-configmap-79gwz" to be "success or failure"
Dec 16 13:02:34.497: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 48.950382ms
Dec 16 13:02:36.644: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195303513s
Dec 16 13:02:38.660: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212031075s
Dec 16 13:02:40.698: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249987637s
Dec 16 13:02:43.202: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75346336s
Dec 16 13:02:45.241: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.792453572s
Dec 16 13:02:47.257: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.808400611s
Dec 16 13:02:49.363: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.914958261s
STEP: Saw pod success
Dec 16 13:02:49.364: INFO: Pod "pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:02:49.421: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 16 13:02:49.670: INFO: Waiting for pod pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004 to disappear
Dec 16 13:02:49.684: INFO: Pod pod-configmaps-53b1ce8a-2004-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:02:49.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-79gwz" for this suite.
Dec 16 13:02:55.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:02:55.951: INFO: namespace: e2e-tests-configmap-79gwz, resource: bindings, ignored listing per whitelist
Dec 16 13:02:56.169: INFO: namespace e2e-tests-configmap-79gwz deletion completed in 6.472568157s

• [SLOW TEST:22.204 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:02:56.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 16 13:03:08.501: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-60c53daf-2004-11ea-9388-0242ac110004,GenerateName:,Namespace:e2e-tests-events-26l9m,SelfLink:/api/v1/namespaces/e2e-tests-events-26l9m/pods/send-events-60c53daf-2004-11ea-9388-0242ac110004,UID:60c66f52-2004-11ea-a994-fa163e34d433,ResourceVersion:15016782,Generation:0,CreationTimestamp:2019-12-16 13:02:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 371893734,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7bnnd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7bnnd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-7bnnd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024a9ab0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0024a9ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:02:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:03:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:03:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-16 13:02:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-16 13:02:56 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-16 13:03:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://11218cd031c2881c669f3266f1804c2ac6926a7edaa2fc2320d287b63f761f26}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 16 13:03:10.531: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 16 13:03:12.561: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:03:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-26l9m" for this suite.
Dec 16 13:03:54.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:03:54.786: INFO: namespace: e2e-tests-events-26l9m, resource: bindings, ignored listing per whitelist
Dec 16 13:03:54.848: INFO: namespace e2e-tests-events-26l9m deletion completed in 42.209727542s

• [SLOW TEST:58.678 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:03:54.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 16 13:04:23.115: INFO: Container started at 2019-12-16 13:04:06 +0000 UTC, pod became ready at 2019-12-16 13:04:22 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:04:23.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-95kqm" for this suite.
Dec 16 13:04:47.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:04:47.411: INFO: namespace: e2e-tests-container-probe-95kqm, resource: bindings, ignored listing per whitelist
Dec 16 13:04:47.428: INFO: namespace e2e-tests-container-probe-95kqm deletion completed in 24.281357921s

• [SLOW TEST:52.579 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:04:47.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 16 13:04:47.809: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4bp5c,SelfLink:/api/v1/namespaces/e2e-tests-watch-4bp5c/configmaps/e2e-watch-test-watch-closed,UID:a32ab69d-2004-11ea-a994-fa163e34d433,ResourceVersion:15016943,Generation:0,CreationTimestamp:2019-12-16 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 16 13:04:47.811: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4bp5c,SelfLink:/api/v1/namespaces/e2e-tests-watch-4bp5c/configmaps/e2e-watch-test-watch-closed,UID:a32ab69d-2004-11ea-a994-fa163e34d433,ResourceVersion:15016944,Generation:0,CreationTimestamp:2019-12-16 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 16 13:04:47.925: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4bp5c,SelfLink:/api/v1/namespaces/e2e-tests-watch-4bp5c/configmaps/e2e-watch-test-watch-closed,UID:a32ab69d-2004-11ea-a994-fa163e34d433,ResourceVersion:15016945,Generation:0,CreationTimestamp:2019-12-16 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 16 13:04:47.926: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4bp5c,SelfLink:/api/v1/namespaces/e2e-tests-watch-4bp5c/configmaps/e2e-watch-test-watch-closed,UID:a32ab69d-2004-11ea-a994-fa163e34d433,ResourceVersion:15016946,Generation:0,CreationTimestamp:2019-12-16 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:04:47.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4bp5c" for this suite.
Dec 16 13:04:53.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:04:54.116: INFO: namespace: e2e-tests-watch-4bp5c, resource: bindings, ignored listing per whitelist
Dec 16 13:04:54.185: INFO: namespace e2e-tests-watch-4bp5c deletion completed in 6.248397073s

• [SLOW TEST:6.757 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:04:54.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 16 13:05:20.640: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:20.722: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 13:05:22.724: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:22.788: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 13:05:24.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:24.742: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 13:05:26.724: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:27.433: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 13:05:28.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:28.755: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 13:05:30.724: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:31.164: INFO: Pod pod-with-prestop-http-hook still exists
Dec 16 13:05:32.723: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 16 13:05:32.787: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:05:32.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pztpk" for this suite.
Dec 16 13:05:59.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:05:59.242: INFO: namespace: e2e-tests-container-lifecycle-hook-pztpk, resource: bindings, ignored listing per whitelist
Dec 16 13:05:59.254: INFO: namespace e2e-tests-container-lifecycle-hook-pztpk deletion completed in 26.420177907s

• [SLOW TEST:65.067 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:05:59.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 16 13:05:59.472: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 16 13:06:04.524: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:06:04.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-bktms" for this suite.
Dec 16 13:06:19.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:06:19.993: INFO: namespace: e2e-tests-replication-controller-bktms, resource: bindings, ignored listing per whitelist
Dec 16 13:06:20.058: INFO: namespace e2e-tests-replication-controller-bktms deletion completed in 15.078681436s

• [SLOW TEST:20.804 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:06:20.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-daea36ad-2004-11ea-9388-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 16 13:06:21.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-4bsd4" to be "success or failure"
Dec 16 13:06:21.359: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.013284ms
Dec 16 13:06:23.421: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07908281s
Dec 16 13:06:25.450: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108491165s
Dec 16 13:06:27.954: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.612478585s
Dec 16 13:06:29.998: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656493552s
Dec 16 13:06:32.684: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.341785727s
Dec 16 13:06:34.743: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.400958003s
STEP: Saw pod success
Dec 16 13:06:34.744: INFO: Pod "pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:06:34.761: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 16 13:06:35.200: INFO: Waiting for pod pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004 to disappear
Dec 16 13:06:35.207: INFO: Pod pod-projected-secrets-daeca0c9-2004-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:06:35.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4bsd4" for this suite.
Dec 16 13:06:42.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:06:43.163: INFO: namespace: e2e-tests-projected-4bsd4, resource: bindings, ignored listing per whitelist
Dec 16 13:06:43.163: INFO: namespace e2e-tests-projected-4bsd4 deletion completed in 7.945059293s

• [SLOW TEST:23.105 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:06:43.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 16 13:06:43.496: INFO: PodSpec: initContainers in spec.initContainers
Dec 16 13:08:02.023: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e825be56-2004-11ea-9388-0242ac110004", GenerateName:"", Namespace:"e2e-tests-init-container-nsh68", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-nsh68/pods/pod-init-e825be56-2004-11ea-9388-0242ac110004", UID:"e826f3aa-2004-11ea-a994-fa163e34d433", ResourceVersion:"15017308", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712098403, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"496754723"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-55fbf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002460040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-55fbf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-55fbf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-55fbf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a42148), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022ea000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a421c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a421e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a421e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a421ec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712098403, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712098403, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712098403, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712098403, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0027802e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002ae52d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002ae5340)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4b3eb77426b127d1b73b358e0a42024cd845e5164f428e5f0af4b38e4a2350f7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002780320), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002780300), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:08:02.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-nsh68" for this suite.
Dec 16 13:08:18.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:08:18.451: INFO: namespace: e2e-tests-init-container-nsh68, resource: bindings, ignored listing per whitelist
Dec 16 13:08:18.562: INFO: namespace e2e-tests-init-container-nsh68 deletion completed in 16.375421743s

• [SLOW TEST:95.398 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:08:18.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-213d228a-2005-11ea-9388-0242ac110004
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-213d228a-2005-11ea-9388-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:08:35.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ncm2p" for this suite.
Dec 16 13:09:02.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:09:02.361: INFO: namespace: e2e-tests-configmap-ncm2p, resource: bindings, ignored listing per whitelist
Dec 16 13:09:02.366: INFO: namespace e2e-tests-configmap-ncm2p deletion completed in 26.366741115s

• [SLOW TEST:43.802 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:09:02.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-qvj5
STEP: Creating a pod to test atomic-volume-subpath
Dec 16 13:09:02.948: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qvj5" in namespace "e2e-tests-subpath-xkfqp" to be "success or failure"
Dec 16 13:09:02.978: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.146557ms
Dec 16 13:09:05.578: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629521885s
Dec 16 13:09:07.626: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.677634187s
Dec 16 13:09:10.556: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.608173257s
Dec 16 13:09:12.592: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.643792067s
Dec 16 13:09:14.620: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.672007208s
Dec 16 13:09:16.679: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.73102576s
Dec 16 13:09:18.699: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.75139877s
Dec 16 13:09:20.758: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.810166523s
Dec 16 13:09:22.780: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 19.832427919s
Dec 16 13:09:24.794: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 21.846179401s
Dec 16 13:09:26.808: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 23.860160508s
Dec 16 13:09:28.837: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 25.889390556s
Dec 16 13:09:30.883: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 27.935469025s
Dec 16 13:09:32.904: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 29.956055798s
Dec 16 13:09:34.918: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 31.970126054s
Dec 16 13:09:36.965: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 34.016810548s
Dec 16 13:09:39.034: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Running", Reason="", readiness=false. Elapsed: 36.086109649s
Dec 16 13:09:41.045: INFO: Pod "pod-subpath-test-configmap-qvj5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.097377559s
STEP: Saw pod success
Dec 16 13:09:41.046: INFO: Pod "pod-subpath-test-configmap-qvj5" satisfied condition "success or failure"
Dec 16 13:09:41.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-qvj5 container test-container-subpath-configmap-qvj5: 
STEP: delete the pod
Dec 16 13:09:41.910: INFO: Waiting for pod pod-subpath-test-configmap-qvj5 to disappear
Dec 16 13:09:42.522: INFO: Pod pod-subpath-test-configmap-qvj5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qvj5
Dec 16 13:09:42.522: INFO: Deleting pod "pod-subpath-test-configmap-qvj5" in namespace "e2e-tests-subpath-xkfqp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:09:42.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-xkfqp" for this suite.
Dec 16 13:09:48.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:09:48.913: INFO: namespace: e2e-tests-subpath-xkfqp, resource: bindings, ignored listing per whitelist
Dec 16 13:09:49.119: INFO: namespace e2e-tests-subpath-xkfqp deletion completed in 6.420707098s

• [SLOW TEST:46.753 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
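The Subpath test above mounts a single key of a configMap volume via `subPath` rather than mounting the whole volume directory. A hypothetical minimal pod exercising the same mechanism (placeholder names; assumes a working cluster and an existing ConfigMap) could be:

```shell
# Sketch only: demo-config and its data-1 key are hypothetical, not the
# test-generated resources from this run.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /test/sub"]
    volumeMounts:
    - name: config-volume
      mountPath: /test/sub
      subPath: data-1        # mounts one file of the volume at this path
  volumes:
  - name: config-volume
    configMap:
      name: demo-config
EOF
```

The "atomic writer" part of the test name refers to how the kubelet publishes configMap/secret/downwardAPI volume contents via atomically-swapped symlinked directories.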
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:09:49.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 13:09:49.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-pph5f" to be "success or failure"
Dec 16 13:09:49.768: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 116.777894ms
Dec 16 13:09:51.989: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337049221s
Dec 16 13:09:54.051: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399102537s
Dec 16 13:09:56.066: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414408985s
Dec 16 13:09:58.340: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68857442s
Dec 16 13:10:00.644: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.992043851s
Dec 16 13:10:02.678: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.026824373s
STEP: Saw pod success
Dec 16 13:10:02.679: INFO: Pod "downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:10:02.688: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 13:10:02.798: INFO: Waiting for pod downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004 to disappear
Dec 16 13:10:02.904: INFO: Pod downwardapi-volume-570ea5e7-2005-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:10:02.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pph5f" for this suite.
Dec 16 13:10:08.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:10:09.126: INFO: namespace: e2e-tests-projected-pph5f, resource: bindings, ignored listing per whitelist
Dec 16 13:10:09.228: INFO: namespace e2e-tests-projected-pph5f deletion completed in 6.296002081s

• [SLOW TEST:20.108 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
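The Projected downwardAPI test above verifies that a per-item `mode` is applied to a file exposed through a projected volume. A hypothetical minimal equivalent (placeholder names; assumes a working cluster) might be:

```shell
# Sketch only: names and paths are illustrative.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400       # the per-item file mode the test asserts on
EOF
```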
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:10:09.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-62e26645-2005-11ea-9388-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 16 13:10:09.488: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004" in namespace "e2e-tests-projected-prvd7" to be "success or failure"
Dec 16 13:10:09.501: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.735982ms
Dec 16 13:10:11.575: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087313328s
Dec 16 13:10:13.604: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115771998s
Dec 16 13:10:15.631: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143123547s
Dec 16 13:10:17.911: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.422763634s
Dec 16 13:10:19.941: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.453502217s
Dec 16 13:10:21.965: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.477203928s
STEP: Saw pod success
Dec 16 13:10:21.965: INFO: Pod "pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:10:21.973: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 16 13:10:24.409: INFO: Waiting for pod pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004 to disappear
Dec 16 13:10:24.552: INFO: Pod pod-projected-configmaps-62ea27b0-2005-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:10:24.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-prvd7" for this suite.
Dec 16 13:10:32.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:10:32.983: INFO: namespace: e2e-tests-projected-prvd7, resource: bindings, ignored listing per whitelist
Dec 16 13:10:33.110: INFO: namespace e2e-tests-projected-prvd7 deletion completed in 8.533682842s

• [SLOW TEST:23.882 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
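The Projected configMap test above consumes a ConfigMap through a `projected` volume source instead of a plain `configMap` volume. A hypothetical minimal version (placeholder names; assumes a working cluster and an existing ConfigMap) could be:

```shell
# Sketch only: demo-config is hypothetical, not the generated name from this run.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
```

Projected volumes can combine configMap, secret, downwardAPI, and serviceAccountToken sources under one mount point, which is what distinguishes them from the plain configMap volume type tested earlier.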
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:10:33.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 16 13:10:33.367: INFO: Waiting up to 5m0s for pod "pod-712562da-2005-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-2qn6s" to be "success or failure"
Dec 16 13:10:33.508: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 140.775287ms
Dec 16 13:10:36.863: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495602614s
Dec 16 13:10:39.066: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.698854686s
Dec 16 13:10:41.083: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.715308763s
Dec 16 13:10:43.156: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.788248282s
Dec 16 13:10:45.176: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.808766281s
Dec 16 13:10:47.190: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.822116216s
STEP: Saw pod success
Dec 16 13:10:47.190: INFO: Pod "pod-712562da-2005-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:10:47.287: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-712562da-2005-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 13:10:47.563: INFO: Waiting for pod pod-712562da-2005-11ea-9388-0242ac110004 to disappear
Dec 16 13:10:47.770: INFO: Pod pod-712562da-2005-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:10:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2qn6s" for this suite.
Dec 16 13:10:53.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:10:54.274: INFO: namespace: e2e-tests-emptydir-2qn6s, resource: bindings, ignored listing per whitelist
Dec 16 13:10:54.413: INFO: namespace e2e-tests-emptydir-2qn6s deletion completed in 6.617535229s

• [SLOW TEST:21.303 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
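The EmptyDir test above covers the (non-root, 0777, tmpfs) variant: a memory-backed emptyDir volume accessed by a non-root container. A hypothetical minimal pod along those lines (placeholder names and UID; assumes a working cluster) might be:

```shell
# Sketch only: the e2e test's actual container writes and checks a test file;
# this just shows the volume and security settings involved.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root, mirroring the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # backs the volume with tmpfs instead of node disk
EOF
```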
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:10:54.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 16 13:10:54.757: INFO: Waiting up to 5m0s for pod "pod-7de64574-2005-11ea-9388-0242ac110004" in namespace "e2e-tests-emptydir-hqcv7" to be "success or failure"
Dec 16 13:10:54.767: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903793ms
Dec 16 13:10:57.218: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.460009559s
Dec 16 13:10:59.258: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500167145s
Dec 16 13:11:01.989: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.230849758s
Dec 16 13:11:04.003: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.245542231s
Dec 16 13:11:06.273: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.514965246s
Dec 16 13:11:08.292: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.533946459s
STEP: Saw pod success
Dec 16 13:11:08.292: INFO: Pod "pod-7de64574-2005-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:11:08.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7de64574-2005-11ea-9388-0242ac110004 container test-container: 
STEP: delete the pod
Dec 16 13:11:08.975: INFO: Waiting for pod pod-7de64574-2005-11ea-9388-0242ac110004 to disappear
Dec 16 13:11:09.270: INFO: Pod pod-7de64574-2005-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:11:09.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hqcv7" for this suite.
Dec 16 13:11:15.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:11:15.424: INFO: namespace: e2e-tests-emptydir-hqcv7, resource: bindings, ignored listing per whitelist
Dec 16 13:11:15.601: INFO: namespace e2e-tests-emptydir-hqcv7 deletion completed in 6.313326142s

• [SLOW TEST:21.187 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:11:15.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 16 13:11:15.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j275s'
Dec 16 13:11:18.201: INFO: stderr: ""
Dec 16 13:11:18.201: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 16 13:11:20.118: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:20.118: INFO: Found 0 / 1
Dec 16 13:11:20.532: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:20.532: INFO: Found 0 / 1
Dec 16 13:11:21.220: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:21.221: INFO: Found 0 / 1
Dec 16 13:11:22.267: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:22.267: INFO: Found 0 / 1
Dec 16 13:11:23.215: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:23.216: INFO: Found 0 / 1
Dec 16 13:11:24.226: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:24.226: INFO: Found 0 / 1
Dec 16 13:11:26.407: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:26.407: INFO: Found 0 / 1
Dec 16 13:11:28.073: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:28.073: INFO: Found 0 / 1
Dec 16 13:11:28.431: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:28.431: INFO: Found 0 / 1
Dec 16 13:11:29.331: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:29.332: INFO: Found 0 / 1
Dec 16 13:11:30.257: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:30.258: INFO: Found 0 / 1
Dec 16 13:11:31.212: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:31.212: INFO: Found 0 / 1
Dec 16 13:11:32.228: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:32.228: INFO: Found 1 / 1
Dec 16 13:11:32.228: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 16 13:11:32.392: INFO: Selector matched 1 pods for map[app:redis]
Dec 16 13:11:32.392: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 16 13:11:32.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gklp6 redis-master --namespace=e2e-tests-kubectl-j275s'
Dec 16 13:11:32.698: INFO: stderr: ""
Dec 16 13:11:32.698: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 16 Dec 13:11:30.713 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Dec 13:11:30.713 # Server started, Redis version 3.2.12\n1:M 16 Dec 13:11:30.713 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Dec 13:11:30.713 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 16 13:11:32.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gklp6 redis-master --namespace=e2e-tests-kubectl-j275s --tail=1'
Dec 16 13:11:32.950: INFO: stderr: ""
Dec 16 13:11:32.951: INFO: stdout: "1:M 16 Dec 13:11:30.713 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 16 13:11:32.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gklp6 redis-master --namespace=e2e-tests-kubectl-j275s --limit-bytes=1'
Dec 16 13:11:33.212: INFO: stderr: ""
Dec 16 13:11:33.212: INFO: stdout: " "
STEP: exposing timestamps
Dec 16 13:11:33.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gklp6 redis-master --namespace=e2e-tests-kubectl-j275s --tail=1 --timestamps'
Dec 16 13:11:33.346: INFO: stderr: ""
Dec 16 13:11:33.346: INFO: stdout: "2019-12-16T13:11:30.71423901Z 1:M 16 Dec 13:11:30.713 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 16 13:11:35.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gklp6 redis-master --namespace=e2e-tests-kubectl-j275s --since=1s'
Dec 16 13:11:36.069: INFO: stderr: ""
Dec 16 13:11:36.069: INFO: stdout: ""
Dec 16 13:11:36.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gklp6 redis-master --namespace=e2e-tests-kubectl-j275s --since=24h'
Dec 16 13:11:36.203: INFO: stderr: ""
Dec 16 13:11:36.204: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 16 Dec 13:11:30.713 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Dec 13:11:30.713 # Server started, Redis version 3.2.12\n1:M 16 Dec 13:11:30.713 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Dec 13:11:30.713 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 16 13:11:36.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-j275s'
Dec 16 13:11:36.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 16 13:11:36.376: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 16 13:11:36.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-j275s'
Dec 16 13:11:36.554: INFO: stderr: "No resources found.\n"
Dec 16 13:11:36.555: INFO: stdout: ""
Dec 16 13:11:36.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-j275s -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 16 13:11:36.866: INFO: stderr: ""
Dec 16 13:11:36.866: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:11:36.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j275s" for this suite.
Dec 16 13:12:01.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:12:01.202: INFO: namespace: e2e-tests-kubectl-j275s, resource: bindings, ignored listing per whitelist
Dec 16 13:12:01.381: INFO: namespace e2e-tests-kubectl-j275s deletion completed in 24.465478327s

• [SLOW TEST:45.781 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
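The Kubectl logs test above walks through the standard log-filtering flags. Outside the test harness, the same checks look like the following (the pod and container names are from this run and will differ elsewhere; requires a running cluster):

```shell
kubectl logs redis-master-gklp6 redis-master                  # full container log
kubectl logs redis-master-gklp6 redis-master --tail=1         # last line only
kubectl logs redis-master-gklp6 redis-master --limit-bytes=1  # first byte only
kubectl logs redis-master-gklp6 redis-master --tail=1 --timestamps
kubectl logs redis-master-gklp6 redis-master --since=1s       # empty if quiet >1s
kubectl logs redis-master-gklp6 redis-master --since=24h
```

Note that the run itself invoked the singular `kubectl log`, a deprecated alias from that era; `kubectl logs` is the supported form.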
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:12:01.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2j6wr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2j6wr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2j6wr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.78.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.78.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.78.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.78.197_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2j6wr;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2j6wr;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2j6wr.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2j6wr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 197.78.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.78.197_udp@PTR;check="$$(dig +tcp +noall +answer +search 197.78.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.78.197_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 16 13:12:24.139: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.147: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.155: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-2j6wr from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.163: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-2j6wr from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.169: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.186: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.196: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.211: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.234: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.266: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.294: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.341: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.355: INFO: Unable to read 10.104.78.197_udp@PTR from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.367: INFO: Unable to read 10.104.78.197_tcp@PTR from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.374: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.380: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.385: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2j6wr from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.389: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2j6wr from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.396: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.407: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.421: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.425: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.429: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.446: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.458: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.499: INFO: Unable to read 10.104.78.197_udp@PTR from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.522: INFO: Unable to read 10.104.78.197_tcp@PTR from pod e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004: the server could not find the requested resource (get pods dns-test-a5f10506-2005-11ea-9388-0242ac110004)
Dec 16 13:12:24.522: INFO: Lookups using e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-2j6wr wheezy_tcp@dns-test-service.e2e-tests-dns-2j6wr wheezy_udp@dns-test-service.e2e-tests-dns-2j6wr.svc wheezy_tcp@dns-test-service.e2e-tests-dns-2j6wr.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.78.197_udp@PTR 10.104.78.197_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-2j6wr jessie_tcp@dns-test-service.e2e-tests-dns-2j6wr jessie_udp@dns-test-service.e2e-tests-dns-2j6wr.svc jessie_tcp@dns-test-service.e2e-tests-dns-2j6wr.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2j6wr.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2j6wr.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.78.197_udp@PTR 10.104.78.197_tcp@PTR]

Dec 16 13:12:29.935: INFO: DNS probes using e2e-tests-dns-2j6wr/dns-test-a5f10506-2005-11ea-9388-0242ac110004 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:12:30.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-2j6wr" for this suite.
Dec 16 13:12:38.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:12:38.714: INFO: namespace: e2e-tests-dns-2j6wr, resource: bindings, ignored listing per whitelist
Dec 16 13:12:38.955: INFO: namespace e2e-tests-dns-2j6wr deletion completed in 8.3299916s

• [SLOW TEST:37.573 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:12:38.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 16 13:12:39.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004" in namespace "e2e-tests-downward-api-brm54" to be "success or failure"
Dec 16 13:12:39.654: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 175.189344ms
Dec 16 13:12:42.703: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.223668203s
Dec 16 13:12:44.726: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.246738766s
Dec 16 13:12:47.664: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184366949s
Dec 16 13:12:49.688: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.208679705s
Dec 16 13:12:51.701: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221791896s
Dec 16 13:12:53.733: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.253733281s
STEP: Saw pod success
Dec 16 13:12:53.733: INFO: Pod "downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004" satisfied condition "success or failure"
Dec 16 13:12:53.759: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004 container client-container: 
STEP: delete the pod
Dec 16 13:12:54.042: INFO: Waiting for pod downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004 to disappear
Dec 16 13:12:54.062: INFO: Pod downwardapi-volume-bc50e495-2005-11ea-9388-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:12:54.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-brm54" for this suite.
Dec 16 13:13:00.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:13:00.358: INFO: namespace: e2e-tests-downward-api-brm54, resource: bindings, ignored listing per whitelist
Dec 16 13:13:00.365: INFO: namespace e2e-tests-downward-api-brm54 deletion completed in 6.292268089s

• [SLOW TEST:21.410 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:13:00.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 16 13:13:00.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lrld5'
Dec 16 13:13:00.773: INFO: stderr: ""
Dec 16 13:13:00.774: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 16 13:13:10.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lrld5 -o json'
Dec 16 13:13:11.029: INFO: stderr: ""
Dec 16 13:13:11.030: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-16T13:13:00Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-lrld5\",\n        \"resourceVersion\": \"15017946\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-lrld5/pods/e2e-test-nginx-pod\",\n        \"uid\": \"c9000609-2005-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-csft7\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-csft7\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-csft7\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T13:13:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T13:13:09Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T13:13:09Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-16T13:13:00Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://957d8ee3e30f479f6cc42102b4373bd8327ae6d3f8782d090956b19c029edeb9\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-16T13:13:09Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-16T13:13:00Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 16 13:13:11.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-lrld5'
Dec 16 13:13:11.471: INFO: stderr: ""
Dec 16 13:13:11.471: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 16 13:13:11.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lrld5'
Dec 16 13:13:21.840: INFO: stderr: ""
Dec 16 13:13:21.841: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:13:21.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lrld5" for this suite.
Dec 16 13:13:28.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:13:28.194: INFO: namespace: e2e-tests-kubectl-lrld5, resource: bindings, ignored listing per whitelist
Dec 16 13:13:28.242: INFO: namespace e2e-tests-kubectl-lrld5 deletion completed in 6.359019368s

• [SLOW TEST:27.876 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:13:28.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1216 13:14:21.463029       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 16 13:14:21.463: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:14:21.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-glv4s" for this suite.
Dec 16 13:14:30.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:14:36.554: INFO: namespace: e2e-tests-gc-glv4s, resource: bindings, ignored listing per whitelist
Dec 16 13:14:36.628: INFO: namespace e2e-tests-gc-glv4s deletion completed in 14.556380404s

• [SLOW TEST:68.386 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:14:36.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 16 13:15:08.612: INFO: Successfully updated pod "pod-update-02d11472-2006-11ea-9388-0242ac110004"
STEP: verifying the updated pod is in kubernetes
Dec 16 13:15:08.654: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:15:08.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2qvdd" for this suite.
Dec 16 13:15:32.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:15:32.969: INFO: namespace: e2e-tests-pods-2qvdd, resource: bindings, ignored listing per whitelist
Dec 16 13:15:33.010: INFO: namespace e2e-tests-pods-2qvdd deletion completed in 24.335728659s

• [SLOW TEST:56.381 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 16 13:15:33.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pp457
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 16 13:15:33.206: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 16 13:16:17.595: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-pp457 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 16 13:16:17.595: INFO: >>> kubeConfig: /root/.kube/config
Dec 16 13:16:18.345: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 16 13:16:18.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-pp457" for this suite.
Dec 16 13:16:42.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 16 13:16:42.449: INFO: namespace: e2e-tests-pod-network-test-pp457, resource: bindings, ignored listing per whitelist
Dec 16 13:16:42.657: INFO: namespace e2e-tests-pod-network-test-pp457 deletion completed in 24.298731547s

• [SLOW TEST:69.647 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSDec 16 13:16:42.657: INFO: Running AfterSuite actions on all nodes
Dec 16 13:16:42.658: INFO: Running AfterSuite actions on node 1
Dec 16 13:16:42.658: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8965.857 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped PASS