I1228 10:47:04.391990 8 e2e.go:224] Starting e2e run "621b5424-295f-11ea-8e71-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577530023 - Will randomize all specs
Will run 201 of 2164 specs

Dec 28 10:47:04.656: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 10:47:04.658: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 28 10:47:04.679: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 28 10:47:04.711: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 28 10:47:04.711: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 28 10:47:04.711: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 28 10:47:04.720: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 28 10:47:04.720: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 28 10:47:04.720: INFO: e2e test version: v1.13.12
Dec 28 10:47:04.721: INFO: kube-apiserver version: v1.13.8
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:47:04.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Dec 28 10:47:04.883: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 10:47:04.908: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 28 10:47:04.918: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vz6c7/daemonsets","resourceVersion":"16335488"},"items":null}
Dec 28 10:47:04.968: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vz6c7/pods","resourceVersion":"16335488"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:47:04.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vz6c7" for this suite.
Dec 28 10:47:11.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:47:11.131: INFO: namespace: e2e-tests-daemonsets-vz6c7, resource: bindings, ignored listing per whitelist
Dec 28 10:47:11.146: INFO: namespace e2e-tests-daemonsets-vz6c7 deletion completed in 6.162372027s

S [SKIPPING] [6.425 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 28 10:47:04.908: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:47:11.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fcgvj
Dec 28 10:47:25.688: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fcgvj
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 10:47:25.692: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:51:27.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fcgvj" for this suite.
Dec 28 10:51:33.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:51:33.493: INFO: namespace: e2e-tests-container-probe-fcgvj, resource: bindings, ignored listing per whitelist
Dec 28 10:51:33.498: INFO: namespace e2e-tests-container-probe-fcgvj deletion completed in 6.21951828s

• [SLOW TEST:262.352 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:51:33.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 10:51:33.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-kh9pc" to be "success or failure"
Dec 28 10:51:33.802: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 67.980174ms
Dec 28 10:51:36.037: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302222431s
Dec 28 10:51:38.069: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334699234s
Dec 28 10:51:40.084: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349108166s
Dec 28 10:51:42.109: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.374385753s
Dec 28 10:51:44.124: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.389736963s
STEP: Saw pod success
Dec 28 10:51:44.124: INFO: Pod "downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 10:51:44.130: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 10:51:44.235: INFO: Waiting for pod downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005 to disappear
Dec 28 10:51:44.265: INFO: Pod downwardapi-volume-0349f424-2960-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:51:44.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kh9pc" for this suite.
Dec 28 10:51:50.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:51:50.431: INFO: namespace: e2e-tests-downward-api-kh9pc, resource: bindings, ignored listing per whitelist
Dec 28 10:51:50.664: INFO: namespace e2e-tests-downward-api-kh9pc deletion completed in 6.389492129s

• [SLOW TEST:17.166 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:51:50.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 28 10:51:51.505: INFO: Waiting up to 5m0s for pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9" in namespace "e2e-tests-svcaccounts-m7fzt" to be "success or failure"
Dec 28 10:51:51.524: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.4352ms
Dec 28 10:51:53.704: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198708699s
Dec 28 10:51:55.722: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216598207s
Dec 28 10:51:58.035: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.529830899s
Dec 28 10:52:00.056: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550585281s
Dec 28 10:52:02.068: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.563490711s
Dec 28 10:52:04.088: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.582795187s
Dec 28 10:52:06.192: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.686883822s
Dec 28 10:52:08.621: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.116044015s
STEP: Saw pod success
Dec 28 10:52:08.621: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9" satisfied condition "success or failure"
Dec 28 10:52:08.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9 container token-test: 
STEP: delete the pod
Dec 28 10:52:09.456: INFO: Waiting for pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9 to disappear
Dec 28 10:52:09.483: INFO: Pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-82sm9 no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 28 10:52:09.709: INFO: Waiting up to 5m0s for pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds" in namespace "e2e-tests-svcaccounts-m7fzt" to be "success or failure"
Dec 28 10:52:09.749: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 40.413533ms
Dec 28 10:52:11.807: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097710069s
Dec 28 10:52:13.864: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154729732s
Dec 28 10:52:16.008: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299470536s
Dec 28 10:52:19.896: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187299439s
Dec 28 10:52:22.812: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 13.102730511s
Dec 28 10:52:24.821: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Pending", Reason="", readiness=false. Elapsed: 15.112392218s
Dec 28 10:52:26.839: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.130403128s
STEP: Saw pod success
Dec 28 10:52:26.839: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds" satisfied condition "success or failure"
Dec 28 10:52:26.860: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds container root-ca-test: 
STEP: delete the pod
Dec 28 10:52:28.267: INFO: Waiting for pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds to disappear
Dec 28 10:52:28.285: INFO: Pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-lhbds no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 28 10:52:28.331: INFO: Waiting up to 5m0s for pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2" in namespace "e2e-tests-svcaccounts-m7fzt" to be "success or failure"
Dec 28 10:52:28.404: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 72.848394ms
Dec 28 10:52:30.686: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354818697s
Dec 28 10:52:32.708: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37678204s
Dec 28 10:52:35.133: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802394036s
Dec 28 10:52:37.148: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817063444s
Dec 28 10:52:39.159: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.828312127s
Dec 28 10:52:41.219: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.888488227s
Dec 28 10:52:43.244: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.912743405s
Dec 28 10:52:45.260: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.929301975s
STEP: Saw pod success
Dec 28 10:52:45.260: INFO: Pod "pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2" satisfied condition "success or failure"
Dec 28 10:52:45.267: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2 container namespace-test: 
STEP: delete the pod
Dec 28 10:52:45.355: INFO: Waiting for pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2 to disappear
Dec 28 10:52:45.363: INFO: Pod pod-service-account-0de1643d-2960-11ea-8e71-0242ac110005-z42j2 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:52:45.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-m7fzt" for this suite.
Dec 28 10:52:53.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:52:53.586: INFO: namespace: e2e-tests-svcaccounts-m7fzt, resource: bindings, ignored listing per whitelist
Dec 28 10:52:53.666: INFO: namespace e2e-tests-svcaccounts-m7fzt deletion completed in 8.234948731s

• [SLOW TEST:63.001 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:52:53.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 28 10:52:54.046: INFO: Waiting up to 5m0s for pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-pwfck" to be "success or failure"
Dec 28 10:52:54.089: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.047638ms
Dec 28 10:52:56.102: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055750095s
Dec 28 10:52:58.129: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082239984s
Dec 28 10:53:00.606: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559758489s
Dec 28 10:53:02.678: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63142872s
Dec 28 10:53:04.693: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.646661832s
Dec 28 10:53:06.793: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.746705779s
STEP: Saw pod success
Dec 28 10:53:06.793: INFO: Pod "downward-api-331b3251-2960-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 10:53:07.219: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-331b3251-2960-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 10:53:07.621: INFO: Waiting for pod downward-api-331b3251-2960-11ea-8e71-0242ac110005 to disappear
Dec 28 10:53:07.631: INFO: Pod downward-api-331b3251-2960-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:53:07.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pwfck" for this suite.
Dec 28 10:53:13.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:53:13.904: INFO: namespace: e2e-tests-downward-api-pwfck, resource: bindings, ignored listing per whitelist
Dec 28 10:53:14.011: INFO: namespace e2e-tests-downward-api-pwfck deletion completed in 6.368796237s

• [SLOW TEST:20.345 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:53:14.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8zbqv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 10:53:14.226: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 10:53:52.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-8zbqv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 10:53:52.895: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 10:53:53.348: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:53:53.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-8zbqv" for this suite.
Dec 28 10:54:17.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:54:17.558: INFO: namespace: e2e-tests-pod-network-test-8zbqv, resource: bindings, ignored listing per whitelist
Dec 28 10:54:17.647: INFO: namespace e2e-tests-pod-network-test-8zbqv deletion completed in 24.275295639s

• [SLOW TEST:63.636 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:54:17.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:55:21.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-hnxfb" for this suite.
Dec 28 10:55:30.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:55:30.206: INFO: namespace: e2e-tests-container-runtime-hnxfb, resource: bindings, ignored listing per whitelist
Dec 28 10:55:30.236: INFO: namespace e2e-tests-container-runtime-hnxfb deletion completed in 8.245074116s

• [SLOW TEST:72.589 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:55:30.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 10:55:30.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-r522t'
Dec 28 10:55:32.571: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 10:55:32.571: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 28 10:55:34.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-r522t'
Dec 28 10:55:35.570: INFO: stderr: ""
Dec 28 10:55:35.570: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:55:35.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r522t" for this suite.
Dec 28 10:55:57.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:55:57.855: INFO: namespace: e2e-tests-kubectl-r522t, resource: bindings, ignored listing per whitelist
Dec 28 10:55:57.972: INFO: namespace e2e-tests-kubectl-r522t deletion completed in 22.388446112s

• [SLOW TEST:27.735 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:55:57.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 28 10:55:58.219: INFO: Waiting up to 5m0s for pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-q8n6x" to be "success or failure"
Dec 28 10:55:58.234: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.860134ms
Dec 28 10:56:00.247: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027304346s
Dec 28 10:56:02.258: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038246323s
Dec 28 10:56:05.059: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.839389854s
Dec 28 10:56:07.085: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.865729626s
Dec 28 10:56:09.095: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.875918539s
STEP: Saw pod success
Dec 28 10:56:09.095: INFO: Pod "pod-a0f11abc-2960-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 10:56:09.100: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a0f11abc-2960-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 10:56:09.246: INFO: Waiting for pod pod-a0f11abc-2960-11ea-8e71-0242ac110005 to disappear
Dec 28 10:56:09.266: INFO: Pod pod-a0f11abc-2960-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:56:09.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q8n6x" for this suite.
Dec 28 10:56:15.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:56:15.615: INFO: namespace: e2e-tests-emptydir-q8n6x, resource: bindings, ignored listing per whitelist
Dec 28 10:56:15.643: INFO: namespace e2e-tests-emptydir-q8n6x deletion completed in 6.373943065s
• [SLOW TEST:17.671 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:56:15.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 10:56:15.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jr6mh'
Dec 28 10:56:15.974: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 10:56:15.974: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 28 10:56:18.023: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-559lz]
Dec 28 10:56:18.023: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-559lz" in namespace "e2e-tests-kubectl-jr6mh" to be "running and ready"
Dec 28 10:56:18.029: INFO: Pod "e2e-test-nginx-rc-559lz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01748ms
Dec 28 10:56:20.053: INFO: Pod "e2e-test-nginx-rc-559lz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02992102s
Dec 28 10:56:22.194: INFO: Pod "e2e-test-nginx-rc-559lz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170861853s
Dec 28 10:56:24.224: INFO: Pod "e2e-test-nginx-rc-559lz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200802722s
Dec 28 10:56:26.231: INFO: Pod "e2e-test-nginx-rc-559lz": Phase="Running", Reason="", readiness=true. Elapsed: 8.208175419s
Dec 28 10:56:26.231: INFO: Pod "e2e-test-nginx-rc-559lz" satisfied condition "running and ready"
Dec 28 10:56:26.231: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-559lz]
Dec 28 10:56:26.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jr6mh'
Dec 28 10:56:26.496: INFO: stderr: ""
Dec 28 10:56:26.496: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 28 10:56:26.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jr6mh'
Dec 28 10:56:26.713: INFO: stderr: ""
Dec 28 10:56:26.713: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:56:26.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jr6mh" for this suite.
Dec 28 10:56:50.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:56:51.105: INFO: namespace: e2e-tests-kubectl-jr6mh, resource: bindings, ignored listing per whitelist
Dec 28 10:56:51.155: INFO: namespace e2e-tests-kubectl-jr6mh deletion completed in 24.361861742s
• [SLOW TEST:35.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:56:51.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:57:03.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5xwzl" for this suite.
Dec 28 10:57:47.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:57:47.608: INFO: namespace: e2e-tests-kubelet-test-5xwzl, resource: bindings, ignored listing per whitelist
Dec 28 10:57:47.703: INFO: namespace e2e-tests-kubelet-test-5xwzl deletion completed in 44.242760589s
• [SLOW TEST:56.548 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach]
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 28 10:57:47.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 28 10:57:47.948: INFO: Creating deployment "test-recreate-deployment" Dec 28 10:57:47.954: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 28 10:57:47.981: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Dec 28 10:57:50.006: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 28 10:57:50.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127467, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 
28 10:57:52.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127467, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 10:57:54.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127467, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 10:57:56.023: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127468, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713127467, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 28 10:57:58.022: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 28 10:57:58.038: INFO: Updating deployment test-recreate-deployment Dec 28 10:57:58.038: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 28 10:57:58.985: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-rdfng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdfng/deployments/test-recreate-deployment,UID:e25c2d31-2960-11ea-a994-fa163e34d433,ResourceVersion:16336691,Generation:2,CreationTimestamp:2019-12-28 10:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-28 10:57:58 +0000 UTC 2019-12-28 10:57:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-28 10:57:58 +0000 UTC 2019-12-28 10:57:47 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 28 10:57:59.023: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-rdfng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdfng/replicasets/test-recreate-deployment-589c4bfd,UID:e8b5237a-2960-11ea-a994-fa163e34d433,ResourceVersion:16336689,Generation:1,CreationTimestamp:2019-12-28 10:57:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e25c2d31-2960-11ea-a994-fa163e34d433 0xc00151824f 0xc001518260}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 28 10:57:59.024: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 28 10:57:59.024: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-rdfng,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rdfng/replicasets/test-recreate-deployment-5bf7f65dc,UID:e2616665-2960-11ea-a994-fa163e34d433,ResourceVersion:16336680,Generation:2,CreationTimestamp:2019-12-28 10:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e25c2d31-2960-11ea-a994-fa163e34d433 0xc0015185d0 0xc0015185d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 28 10:57:59.091: INFO: Pod "test-recreate-deployment-589c4bfd-smsvk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-smsvk,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-rdfng,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rdfng/pods/test-recreate-deployment-589c4bfd-smsvk,UID:e8bf9a8d-2960-11ea-a994-fa163e34d433,ResourceVersion:16336693,Generation:0,CreationTimestamp:2019-12-28 10:57:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd e8b5237a-2960-11ea-a994-fa163e34d433 0xc00151908f 0xc0015190a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sjck5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sjck5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sjck5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001519100} {node.kubernetes.io/unreachable Exists NoExecute 0xc001519120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 10:57:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 10:57:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 10:57:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 10:57:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 10:57:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 28 10:57:59.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-rdfng" for this suite. 
Dec 28 10:58:07.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:58:07.872: INFO: namespace: e2e-tests-deployment-rdfng, resource: bindings, ignored listing per whitelist
Dec 28 10:58:07.937: INFO: namespace e2e-tests-deployment-rdfng deletion completed in 8.830451791s
• [SLOW TEST:20.234 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:58:07.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 10:58:08.179: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 28 10:58:08.198: INFO: Number of nodes with available pods: 0
Dec 28 10:58:08.198: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 28 10:58:08.507: INFO: Number of nodes with available pods: 0
Dec 28 10:58:08.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:09.963: INFO: Number of nodes with available pods: 0
Dec 28 10:58:09.963: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:10.541: INFO: Number of nodes with available pods: 0
Dec 28 10:58:10.542: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:11.667: INFO: Number of nodes with available pods: 0
Dec 28 10:58:11.667: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:12.535: INFO: Number of nodes with available pods: 0
Dec 28 10:58:12.535: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:14.185: INFO: Number of nodes with available pods: 0
Dec 28 10:58:14.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:14.626: INFO: Number of nodes with available pods: 0
Dec 28 10:58:14.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:15.698: INFO: Number of nodes with available pods: 0
Dec 28 10:58:15.698: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:16.521: INFO: Number of nodes with available pods: 0
Dec 28 10:58:16.521: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:17.525: INFO: Number of nodes with available pods: 1
Dec 28 10:58:17.525: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 28 10:58:17.601: INFO: Number of nodes with available pods: 1
Dec 28 10:58:17.601: INFO: Number of running nodes: 0, number of available pods: 1
Dec 28 10:58:18.618: INFO: Number of nodes with available pods: 0
Dec 28 10:58:18.618: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 28 10:58:18.697: INFO: Number of nodes with available pods: 0
Dec 28 10:58:18.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:19.718: INFO: Number of nodes with available pods: 0
Dec 28 10:58:19.718: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:20.712: INFO: Number of nodes with available pods: 0
Dec 28 10:58:20.712: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:21.713: INFO: Number of nodes with available pods: 0
Dec 28 10:58:21.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:22.859: INFO: Number of nodes with available pods: 0
Dec 28 10:58:22.859: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:23.720: INFO: Number of nodes with available pods: 0
Dec 28 10:58:23.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:24.752: INFO: Number of nodes with available pods: 0
Dec 28 10:58:24.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:25.846: INFO: Number of nodes with available pods: 0
Dec 28 10:58:25.846: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:26.818: INFO: Number of nodes with available pods: 0
Dec 28 10:58:26.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:27.709: INFO: Number of nodes with available pods: 0
Dec 28 10:58:27.709: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:28.748: INFO: Number of nodes with available pods: 0
Dec 28 10:58:28.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:29.736: INFO: Number of nodes with available pods: 0
Dec 28 10:58:29.737: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:30.751: INFO: Number of nodes with available pods: 0
Dec 28 10:58:30.751: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:31.746: INFO: Number of nodes with available pods: 0
Dec 28 10:58:31.746: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:32.727: INFO: Number of nodes with available pods: 0
Dec 28 10:58:32.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:33.774: INFO: Number of nodes with available pods: 0
Dec 28 10:58:33.774: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:34.720: INFO: Number of nodes with available pods: 0
Dec 28 10:58:34.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 10:58:35.842: INFO: Number of nodes with available pods: 1
Dec 28 10:58:35.842: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vkzff, will wait for the garbage collector to delete the pods
Dec 28 10:58:36.011: INFO: Deleting DaemonSet.extensions daemon-set took: 88.259464ms
Dec 28 10:58:36.111: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.248942ms
Dec 28 10:58:45.054: INFO: Number of nodes with available pods: 0
Dec 28 10:58:45.054: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 10:58:45.066: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vkzff/daemonsets","resourceVersion":"16336820"},"items":null}
Dec 28 10:58:45.082: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vkzff/pods","resourceVersion":"16336820"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 10:58:45.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vkzff" for this suite.
Dec 28 10:58:51.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 10:58:51.555: INFO: namespace: e2e-tests-daemonsets-vkzff, resource: bindings, ignored listing per whitelist
Dec 28 10:58:51.623: INFO: namespace e2e-tests-daemonsets-vkzff deletion completed in 6.332978625s
• [SLOW TEST:43.685 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 10:58:51.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 28 11:01:56.324: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:01:56.478: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:01:58.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:01:58.500: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:00.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:00.495: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:02.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:02.512: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:04.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:04.511: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:06.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:06.510: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:08.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:08.500: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:10.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:10.490: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:12.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:12.766: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:14.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:14.491: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:16.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:16.495: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:18.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:18.515: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:20.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:20.509: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:22.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:22.495: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 28 11:02:24.479: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 28 11:02:24.498: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:02:24.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hxtfv" for this suite.
Dec 28 11:02:58.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:02:58.715: INFO: namespace: e2e-tests-container-lifecycle-hook-hxtfv, resource: bindings, ignored listing per whitelist
Dec 28 11:02:58.745: INFO: namespace e2e-tests-container-lifecycle-hook-hxtfv deletion completed in 34.231957171s

• [SLOW TEST:247.122 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:02:58.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 28 11:02:58.987: INFO: Waiting up to 5m0s for pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-9fq45" to be "success or failure"
Dec 28 11:02:59.036: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 49.22845ms
Dec 28 11:03:01.044: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056985979s
Dec 28 11:03:03.053: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066234443s
Dec 28 11:03:05.524: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.536406909s
Dec 28 11:03:07.538: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550877694s
Dec 28 11:03:09.557: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.569298071s
Dec 28 11:03:11.572: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.58512466s
STEP: Saw pod success
Dec 28 11:03:11.572: INFO: Pod "pod-9bbd0273-2961-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:03:11.578: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9bbd0273-2961-11ea-8e71-0242ac110005 container test-container:
STEP: delete the pod
Dec 28 11:03:12.174: INFO: Waiting for pod pod-9bbd0273-2961-11ea-8e71-0242ac110005 to disappear
Dec 28 11:03:12.199: INFO: Pod pod-9bbd0273-2961-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:03:12.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9fq45" for this suite.
Dec 28 11:03:20.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:03:20.579: INFO: namespace: e2e-tests-emptydir-9fq45, resource: bindings, ignored listing per whitelist
Dec 28 11:03:20.654: INFO: namespace e2e-tests-emptydir-9fq45 deletion completed in 8.439250266s

• [SLOW TEST:21.909 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:03:20.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a8c58aaa-2961-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:03:20.850: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-rd8jf" to be "success or failure"
Dec 28 11:03:20.878: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.256251ms
Dec 28 11:03:22.907: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056727251s
Dec 28 11:03:24.983: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133351436s
Dec 28 11:03:27.500: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649585136s
Dec 28 11:03:29.519: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.668844649s
Dec 28 11:03:31.540: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.689980471s
STEP: Saw pod success
Dec 28 11:03:31.540: INFO: Pod "pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:03:31.546: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 28 11:03:31.642: INFO: Waiting for pod pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005 to disappear
Dec 28 11:03:31.720: INFO: Pod pod-projected-configmaps-a8c63b48-2961-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:03:31.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rd8jf" for this suite.
Dec 28 11:03:37.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:03:38.040: INFO: namespace: e2e-tests-projected-rd8jf, resource: bindings, ignored listing per whitelist
Dec 28 11:03:38.087: INFO: namespace e2e-tests-projected-rd8jf deletion completed in 6.339864885s

• [SLOW TEST:17.432 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:03:38.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-b326f185-2961-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 11:03:38.318: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-b2c29" to be "success or failure"
Dec 28 11:03:38.379: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.368902ms
Dec 28 11:03:40.400: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081376752s
Dec 28 11:03:42.411: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092377244s
Dec 28 11:03:44.456: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13749146s
Dec 28 11:03:46.486: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167059635s
Dec 28 11:03:48.496: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177456901s
Dec 28 11:03:50.524: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.205423098s
STEP: Saw pod success
Dec 28 11:03:50.524: INFO: Pod "pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:03:50.536: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005 container secret-volume-test:
STEP: delete the pod
Dec 28 11:03:50.894: INFO: Waiting for pod pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005 to disappear
Dec 28 11:03:51.003: INFO: Pod pod-projected-secrets-b32e63d8-2961-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:03:51.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b2c29" for this suite.
Dec 28 11:03:57.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:03:57.289: INFO: namespace: e2e-tests-projected-b2c29, resource: bindings, ignored listing per whitelist
Dec 28 11:03:57.312: INFO: namespace e2e-tests-projected-b2c29 deletion completed in 6.299402523s

• [SLOW TEST:19.226 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:03:57.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:03:57.568: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 70.033628ms)
Dec 28 11:03:57.582: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.020175ms)
Dec 28 11:03:57.598: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.944662ms)
Dec 28 11:03:57.607: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.01821ms)
Dec 28 11:03:57.618: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.668388ms)
Dec 28 11:03:57.626: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.501834ms)
Dec 28 11:03:57.634: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.530247ms)
Dec 28 11:03:57.642: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.382681ms)
Dec 28 11:03:57.651: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.531909ms)
Dec 28 11:03:57.658: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.263898ms)
Dec 28 11:03:57.665: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.268565ms)
Dec 28 11:03:57.672: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.577758ms)
Dec 28 11:03:57.680: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.772329ms)
Dec 28 11:03:57.686: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.025793ms)
Dec 28 11:03:57.692: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.167957ms)
Dec 28 11:03:57.698: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.614781ms)
Dec 28 11:03:57.703: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.009556ms)
Dec 28 11:03:57.710: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.123542ms)
Dec 28 11:03:57.717: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.902941ms)
Dec 28 11:03:57.722: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.493455ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:03:57.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-tbrw4" for this suite.
Dec 28 11:04:03.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:04:04.024: INFO: namespace: e2e-tests-proxy-tbrw4, resource: bindings, ignored listing per whitelist
Dec 28 11:04:04.032: INFO: namespace e2e-tests-proxy-tbrw4 deletion completed in 6.303691703s

• [SLOW TEST:6.719 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:04:04.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:04:04.288: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 11:04:04.325: INFO: Number of nodes with available pods: 0
Dec 28 11:04:04.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:05.615: INFO: Number of nodes with available pods: 0
Dec 28 11:04:05.615: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:06.357: INFO: Number of nodes with available pods: 0
Dec 28 11:04:06.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:07.348: INFO: Number of nodes with available pods: 0
Dec 28 11:04:07.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:08.483: INFO: Number of nodes with available pods: 0
Dec 28 11:04:08.483: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:09.400: INFO: Number of nodes with available pods: 0
Dec 28 11:04:09.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:11.336: INFO: Number of nodes with available pods: 0
Dec 28 11:04:11.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:12.343: INFO: Number of nodes with available pods: 0
Dec 28 11:04:12.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:13.368: INFO: Number of nodes with available pods: 0
Dec 28 11:04:13.368: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:14.360: INFO: Number of nodes with available pods: 0
Dec 28 11:04:14.360: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:15.344: INFO: Number of nodes with available pods: 1
Dec 28 11:04:15.344: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 28 11:04:15.444: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:16.512: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:17.494: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:18.522: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:19.488: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:20.506: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:21.483: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:21.483: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:22.498: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:22.498: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:23.492: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:23.492: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:24.531: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:24.531: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:25.492: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:25.492: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:26.503: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:26.503: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:27.489: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:27.489: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:28.499: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:28.499: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:29.489: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:29.489: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:30.503: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:30.503: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:31.488: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:31.488: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:32.496: INFO: Wrong image for pod: daemon-set-8qckx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 28 11:04:32.496: INFO: Pod daemon-set-8qckx is not available
Dec 28 11:04:33.492: INFO: Pod daemon-set-httmn is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 28 11:04:33.506: INFO: Number of nodes with available pods: 0
Dec 28 11:04:33.506: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:35.193: INFO: Number of nodes with available pods: 0
Dec 28 11:04:35.193: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:35.523: INFO: Number of nodes with available pods: 0
Dec 28 11:04:35.523: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:36.558: INFO: Number of nodes with available pods: 0
Dec 28 11:04:36.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:37.547: INFO: Number of nodes with available pods: 0
Dec 28 11:04:37.547: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:38.540: INFO: Number of nodes with available pods: 0
Dec 28 11:04:38.540: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:40.173: INFO: Number of nodes with available pods: 0
Dec 28 11:04:40.173: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:40.556: INFO: Number of nodes with available pods: 0
Dec 28 11:04:40.556: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:41.537: INFO: Number of nodes with available pods: 0
Dec 28 11:04:41.537: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:42.565: INFO: Number of nodes with available pods: 0
Dec 28 11:04:42.565: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 11:04:43.532: INFO: Number of nodes with available pods: 1
Dec 28 11:04:43.532: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b9dx5, will wait for the garbage collector to delete the pods
Dec 28 11:04:43.639: INFO: Deleting DaemonSet.extensions daemon-set took: 21.403623ms
Dec 28 11:04:43.739: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.34655ms
Dec 28 11:04:50.751: INFO: Number of nodes with available pods: 0
Dec 28 11:04:50.751: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 11:04:50.760: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b9dx5/daemonsets","resourceVersion":"16337447"},"items":null}

Dec 28 11:04:50.769: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b9dx5/pods","resourceVersion":"16337447"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:04:50.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-b9dx5" for this suite.
Dec 28 11:04:56.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:04:56.984: INFO: namespace: e2e-tests-daemonsets-b9dx5, resource: bindings, ignored listing per whitelist
Dec 28 11:04:57.072: INFO: namespace e2e-tests-daemonsets-b9dx5 deletion completed in 6.259174895s

• [SLOW TEST:53.040 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:04:57.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 28 11:04:57.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 28 11:04:57.379: INFO: stderr: ""
Dec 28 11:04:57.379: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:04:57.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wzllf" for this suite.
Dec 28 11:05:03.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:05:03.553: INFO: namespace: e2e-tests-kubectl-wzllf, resource: bindings, ignored listing per whitelist
Dec 28 11:05:03.589: INFO: namespace e2e-tests-kubectl-wzllf deletion completed in 6.199241915s

• [SLOW TEST:6.517 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:05:03.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-e623187a-2961-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:05:03.820: INFO: Waiting up to 5m0s for pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-hlkwp" to be "success or failure"
Dec 28 11:05:03.891: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.241834ms
Dec 28 11:05:06.264: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443728459s
Dec 28 11:05:08.281: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460314857s
Dec 28 11:05:10.788: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967170439s
Dec 28 11:05:12.867: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.046997229s
Dec 28 11:05:14.962: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.141305814s
Dec 28 11:05:17.362: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.541167424s
STEP: Saw pod success
Dec 28 11:05:17.362: INFO: Pod "pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:05:17.376: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 11:05:17.570: INFO: Waiting for pod pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005 to disappear
Dec 28 11:05:17.609: INFO: Pod pod-configmaps-e624f4f8-2961-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:05:17.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hlkwp" for this suite.
Dec 28 11:05:23.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:05:23.916: INFO: namespace: e2e-tests-configmap-hlkwp, resource: bindings, ignored listing per whitelist
Dec 28 11:05:23.950: INFO: namespace e2e-tests-configmap-hlkwp deletion completed in 6.295251366s

• [SLOW TEST:20.361 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:05:23.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:05:24.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:05:35.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rfzpb" for this suite.
Dec 28 11:06:29.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:06:29.273: INFO: namespace: e2e-tests-pods-rfzpb, resource: bindings, ignored listing per whitelist
Dec 28 11:06:29.296: INFO: namespace e2e-tests-pods-rfzpb deletion completed in 54.227694158s

• [SLOW TEST:65.346 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:06:29.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1946540d-2962-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-1946540d-2962-11ea-8e71-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:07:47.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9kmzc" for this suite.
Dec 28 11:08:11.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:08:11.945: INFO: namespace: e2e-tests-configmap-9kmzc, resource: bindings, ignored listing per whitelist
Dec 28 11:08:12.088: INFO: namespace e2e-tests-configmap-9kmzc deletion completed in 24.273784706s

• [SLOW TEST:102.791 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:08:12.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 11:08:12.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-68pq8" to be "success or failure"
Dec 28 11:08:12.568: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.995155ms
Dec 28 11:08:14.685: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156398422s
Dec 28 11:08:16.709: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180185248s
Dec 28 11:08:19.029: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50056509s
Dec 28 11:08:21.051: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522375861s
Dec 28 11:08:23.067: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.538434232s
Dec 28 11:08:25.084: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.555274484s
STEP: Saw pod success
Dec 28 11:08:25.084: INFO: Pod "downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:08:25.101: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 11:08:25.403: INFO: Waiting for pod downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005 to disappear
Dec 28 11:08:25.521: INFO: Pod downwardapi-volume-569d7d7c-2962-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:08:25.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-68pq8" for this suite.
Dec 28 11:08:31.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:08:31.714: INFO: namespace: e2e-tests-projected-68pq8, resource: bindings, ignored listing per whitelist
Dec 28 11:08:31.789: INFO: namespace e2e-tests-projected-68pq8 deletion completed in 6.260029142s

• [SLOW TEST:19.701 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:08:31.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 28 11:08:32.186: INFO: Waiting up to 5m0s for pod "pod-624e664e-2962-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-fgt26" to be "success or failure"
Dec 28 11:08:32.205: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.517238ms
Dec 28 11:08:34.612: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425679263s
Dec 28 11:08:36.640: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452954726s
Dec 28 11:08:38.702: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.515229678s
Dec 28 11:08:40.870: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683064726s
Dec 28 11:08:42.988: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.800974354s
Dec 28 11:08:44.995: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.808706084s
STEP: Saw pod success
Dec 28 11:08:44.995: INFO: Pod "pod-624e664e-2962-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:08:44.998: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-624e664e-2962-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 11:08:45.601: INFO: Waiting for pod pod-624e664e-2962-11ea-8e71-0242ac110005 to disappear
Dec 28 11:08:46.602: INFO: Pod pod-624e664e-2962-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:08:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fgt26" for this suite.
Dec 28 11:08:52.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:08:52.909: INFO: namespace: e2e-tests-emptydir-fgt26, resource: bindings, ignored listing per whitelist
Dec 28 11:08:53.150: INFO: namespace e2e-tests-emptydir-fgt26 deletion completed in 6.512333098s

• [SLOW TEST:21.361 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:08:53.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1228 11:09:03.405857       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 11:09:03.405: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:09:03.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8kmj2" for this suite.
Dec 28 11:09:09.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:09:09.614: INFO: namespace: e2e-tests-gc-8kmj2, resource: bindings, ignored listing per whitelist
Dec 28 11:09:09.903: INFO: namespace e2e-tests-gc-8kmj2 deletion completed in 6.493840974s

• [SLOW TEST:16.753 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:09:09.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:09:10.129: INFO: Creating deployment "nginx-deployment"
Dec 28 11:09:10.140: INFO: Waiting for observed generation 1
Dec 28 11:09:13.021: INFO: Waiting for all required pods to come up
Dec 28 11:09:13.043: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 28 11:09:58.672: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 28 11:09:58.689: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 28 11:09:58.739: INFO: Updating deployment nginx-deployment
Dec 28 11:09:58.739: INFO: Waiting for observed generation 2
Dec 28 11:10:00.823: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 28 11:10:03.020: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 28 11:10:03.041: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 28 11:10:03.341: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 28 11:10:03.341: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 28 11:10:03.367: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 28 11:10:03.373: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 28 11:10:03.373: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 28 11:10:03.382: INFO: Updating deployment nginx-deployment
Dec 28 11:10:03.382: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 28 11:10:04.837: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 28 11:10:08.059: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 28 11:10:10.209: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8qlpf/deployments/nginx-deployment,UID:78f9043b-2962-11ea-a994-fa163e34d433,ResourceVersion:16338220,Generation:3,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-28 11:09:59 +0000 UTC 2019-12-28 11:09:10 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-28 11:10:05 +0000 UTC 2019-12-28 11:10:05 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 28 11:10:10.403: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8qlpf/replicasets/nginx-deployment-5c98f8fb5,UID:95f1ab1f-2962-11ea-a994-fa163e34d433,ResourceVersion:16338218,Generation:3,CreationTimestamp:2019-12-28 11:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 78f9043b-2962-11ea-a994-fa163e34d433 0xc001338717 0xc001338718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 11:10:10.403: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 28 11:10:10.404: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8qlpf/replicasets/nginx-deployment-85ddf47c5d,UID:78fc43c2-2962-11ea-a994-fa163e34d433,ResourceVersion:16338216,Generation:3,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 78f9043b-2962-11ea-a994-fa163e34d433 0xc0013387d7 0xc0013387d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
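The ReplicaSet status above reports `Replicas:20` but only `ReadyReplicas:8` / `AvailableReplicas:8`, and every pod dumped below is `Phase:Pending` with `Ready False` (the rollout uses the nonexistent image `nginx:404`, so containers never start). As a rough illustration, and not the actual Kubernetes implementation, the availability test the controller applies to each pod (Running, Ready, and Ready for at least `minReadySeconds`) can be sketched like this; the function name `is_pod_available` is hypothetical:

```python
from datetime import datetime, timedelta

def is_pod_available(phase, ready, ready_since, min_ready_seconds, now):
    """Hypothetical sketch of the controller's availability check:
    a pod counts as available only if it is Running, its Ready
    condition is True, and it has stayed Ready for at least
    minReadySeconds (here 0, matching MinReadySeconds:0 above)."""
    if phase != "Running" or not ready:
        return False
    return now - ready_since >= timedelta(seconds=min_ready_seconds)

now = datetime(2019, 12, 28, 11, 10, 11)

# The dumped pods are Phase:Pending with Ready=False, so none count:
print(is_pod_available("Pending", False, now, 0, now))   # False
# A pod that has been Running and Ready long enough would count:
print(is_pod_available("Running", True,
                       now - timedelta(seconds=30), 0, now))
```

This is why `AvailableReplicas` stays at 8 while the stuck `nginx:404` pods below accumulate in `ContainerCreating`.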
Dec 28 11:10:11.271: INFO: Pod "nginx-deployment-5c98f8fb5-2z22h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2z22h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-2z22h,UID:9600ae31-2962-11ea-a994-fa163e34d433,ResourceVersion:16338135,Generation:0,CreationTimestamp:2019-12-28 11:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001304dc7 0xc001304dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001304ee0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001304f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 11:09:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.271: INFO: Pod "nginx-deployment-5c98f8fb5-bnwhj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bnwhj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-bnwhj,UID:95fd585e-2962-11ea-a994-fa163e34d433,ResourceVersion:16338126,Generation:0,CreationTimestamp:2019-12-28 11:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001304fc7 0xc001304fc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305030} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0013050c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 11:09:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.271: INFO: Pod "nginx-deployment-5c98f8fb5-ctshd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ctshd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-ctshd,UID:9b8b92ef-2962-11ea-a994-fa163e34d433,ResourceVersion:16338212,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305187 0xc001305188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013051f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.271: INFO: Pod "nginx-deployment-5c98f8fb5-fzxqr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fzxqr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-fzxqr,UID:9b8db6c8-2962-11ea-a994-fa163e34d433,ResourceVersion:16338208,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305287 0xc001305288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013052f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.271: INFO: Pod "nginx-deployment-5c98f8fb5-jdk68" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jdk68,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-jdk68,UID:9b7b8630-2962-11ea-a994-fa163e34d433,ResourceVersion:16338189,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305387 0xc001305388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013053f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.272: INFO: Pod "nginx-deployment-5c98f8fb5-jrsb4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jrsb4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-jrsb4,UID:9ac8b9c7-2962-11ea-a994-fa163e34d433,ResourceVersion:16338184,Generation:0,CreationTimestamp:2019-12-28 11:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305487 0xc001305488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013054f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.272: INFO: Pod "nginx-deployment-5c98f8fb5-l277q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l277q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-l277q,UID:9601952e-2962-11ea-a994-fa163e34d433,ResourceVersion:16338151,Generation:0,CreationTimestamp:2019-12-28 11:09:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305587 0xc001305588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0013055f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 11:09:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.272: INFO: Pod "nginx-deployment-5c98f8fb5-l6hp8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l6hp8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-l6hp8,UID:9c0c9a5d-2962-11ea-a994-fa163e34d433,ResourceVersion:16338219,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc0013056d7 0xc0013056d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305740} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.272: INFO: Pod "nginx-deployment-5c98f8fb5-lfgnh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lfgnh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-lfgnh,UID:9b7bf935-2962-11ea-a994-fa163e34d433,ResourceVersion:16338200,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc0013057d7 0xc0013057d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305840} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.272: INFO: Pod "nginx-deployment-5c98f8fb5-llxkj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-llxkj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-llxkj,UID:9b8c447d-2962-11ea-a994-fa163e34d433,ResourceVersion:16338213,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc0013058d7 0xc0013058d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305940} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.272: INFO: Pod "nginx-deployment-5c98f8fb5-qqxm5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qqxm5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-qqxm5,UID:9683e717-2962-11ea-a994-fa163e34d433,ResourceVersion:16338156,Generation:0,CreationTimestamp:2019-12-28 11:09:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc0013059d7 0xc0013059d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305a40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:03 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 11:10:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.273: INFO: Pod "nginx-deployment-5c98f8fb5-tbsdb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tbsdb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-tbsdb,UID:964cba34-2962-11ea-a994-fa163e34d433,ResourceVersion:16338153,Generation:0,CreationTimestamp:2019-12-28 11:09:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305b27 0xc001305b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305b90} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 11:10:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.273: INFO: Pod "nginx-deployment-5c98f8fb5-tnc87" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tnc87,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-5c98f8fb5-tnc87,UID:9b8ca789-2962-11ea-a994-fa163e34d433,ResourceVersion:16338207,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 95f1ab1f-2962-11ea-a994-fa163e34d433 0xc001305c77 0xc001305c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001305ce0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001305d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.273: INFO: Pod "nginx-deployment-85ddf47c5d-2xf6s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2xf6s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-2xf6s,UID:7910675f-2962-11ea-a994-fa163e34d433,ResourceVersion:16338074,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc001305d77 0xc001305d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001305e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001305e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-28 11:09:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://168822738b0ae0e4d6628d924589828dcf5528f267b5884f6767f09562d8602a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.273: INFO: Pod "nginx-deployment-85ddf47c5d-4z9bk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4z9bk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-4z9bk,UID:9ac6f9f1-2962-11ea-a994-fa163e34d433,ResourceVersion:16338185,Generation:0,CreationTimestamp:2019-12-28 11:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a96027 0xc000a96028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a96290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a962b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.273: INFO: Pod "nginx-deployment-85ddf47c5d-5g7b9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5g7b9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-5g7b9,UID:9b8cb11c-2962-11ea-a994-fa163e34d433,ResourceVersion:16338211,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a96327 0xc000a96328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a96390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a963b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.273: INFO: Pod "nginx-deployment-85ddf47c5d-6gnmb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6gnmb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-6gnmb,UID:9b7b354a-2962-11ea-a994-fa163e34d433,ResourceVersion:16338187,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a96bf7 0xc000a96bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a96c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a96c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.274: INFO: Pod "nginx-deployment-85ddf47c5d-74sjt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-74sjt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-74sjt,UID:7917e648-2962-11ea-a994-fa163e34d433,ResourceVersion:16338078,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a96cf7 0xc000a96cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-28 11:09:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://42afd4dc386bfe0d42c7dca36c4b9f76c17270a156c24660bca444e5bdb3c16f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.274: INFO: Pod "nginx-deployment-85ddf47c5d-7tdwx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7tdwx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-7tdwx,UID:9b8d0267-2962-11ea-a994-fa163e34d433,ResourceVersion:16338206,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97107 0xc000a97108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a971a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.274: INFO: Pod "nginx-deployment-85ddf47c5d-9rlp2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9rlp2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-9rlp2,UID:9a811c2e-2962-11ea-a994-fa163e34d433,ResourceVersion:16338229,Generation:0,CreationTimestamp:2019-12-28 11:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97347 0xc000a97348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a973b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a973d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-28 11:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.274: INFO: Pod "nginx-deployment-85ddf47c5d-cs6zp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cs6zp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-cs6zp,UID:9ac62216-2962-11ea-a994-fa163e34d433,ResourceVersion:16338183,Generation:0,CreationTimestamp:2019-12-28 11:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97567 0xc000a97568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a975d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a975f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.274: INFO: Pod "nginx-deployment-85ddf47c5d-fxtpm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fxtpm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-fxtpm,UID:790ff2f3-2962-11ea-a994-fa163e34d433,ResourceVersion:16338086,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97727 0xc000a97728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a977b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-28 11:09:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b3c2ac5a7bc0b7913cd28548c194b098a37345d7830a7923a73c9d4d22fe516c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.274: INFO: Pod "nginx-deployment-85ddf47c5d-mrtjw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mrtjw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-mrtjw,UID:7908210a-2962-11ea-a994-fa163e34d433,ResourceVersion:16338071,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97877 0xc000a97878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a978e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-28 11:09:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2da0f3754ad4831f54157587fcdb81ec4be097ff4d3713822fc310b20313759a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-n7kq2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n7kq2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-n7kq2,UID:9b7bf751-2962-11ea-a994-fa163e34d433,ResourceVersion:16338202,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a979c7 0xc000a979c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-n7m4x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n7m4x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-n7m4x,UID:9b7bca7d-2962-11ea-a994-fa163e34d433,ResourceVersion:16338199,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97ad7 0xc000a97ad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-qhj94" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qhj94,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-qhj94,UID:790a51e2-2962-11ea-a994-fa163e34d433,ResourceVersion:16338062,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97c57 0xc000a97c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-28 11:09:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3c0caece9ae380b235bd69b158e618d0def8a1e54ca1b38962b360d4629f37e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-spr6n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-spr6n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-spr6n,UID:9b7b9492-2962-11ea-a994-fa163e34d433,ResourceVersion:16338201,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97da7 0xc000a97da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-sxxt4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sxxt4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-sxxt4,UID:791053ae-2962-11ea-a994-fa163e34d433,ResourceVersion:16338068,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97ea7 0xc000a97ea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000a97f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a97f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-28 11:09:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b8b706c62e40d39b2fc234e4f77be34aa84e8991792d74f6b63c3bf1c494c03c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-vsd7q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vsd7q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-vsd7q,UID:7918223e-2962-11ea-a994-fa163e34d433,ResourceVersion:16338099,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc000a97ff7 0xc000a97ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00114a060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00114a080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-28 11:09:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ad820ca2bd04a65762903e331ba024947b7a59baa393494186f325f5f146ee70}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-wdd64" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wdd64,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-wdd64,UID:9b8be4f6-2962-11ea-a994-fa163e34d433,ResourceVersion:16338209,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc00114a147 0xc00114a148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00114a1b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00114a1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.275: INFO: Pod "nginx-deployment-85ddf47c5d-xkhth" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xkhth,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-xkhth,UID:79184cb4-2962-11ea-a994-fa163e34d433,ResourceVersion:16338095,Generation:0,CreationTimestamp:2019-12-28 11:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc00114a247 0xc00114a248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00114a2b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00114a2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:09:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-28 11:09:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 11:09:50 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://72be637b35e4ac9d95240bcbbb6e87a6f2a488be540fdfeadb5eaadf7a005625}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.276: INFO: Pod "nginx-deployment-85ddf47c5d-xt4dm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xt4dm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-xt4dm,UID:9b8c9f70-2962-11ea-a994-fa163e34d433,ResourceVersion:16338215,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc00114a3d7 0xc00114a3d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00114a450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00114a470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 11:10:11.276: INFO: Pod "nginx-deployment-85ddf47c5d-z92lq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z92lq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-8qlpf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8qlpf/pods/nginx-deployment-85ddf47c5d-z92lq,UID:9b8c802f-2962-11ea-a994-fa163e34d433,ResourceVersion:16338210,Generation:0,CreationTimestamp:2019-12-28 11:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 78fc43c2-2962-11ea-a994-fa163e34d433 0xc00114a4e7 0xc00114a4e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgqpv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgqpv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgqpv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc00114a560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00114a590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:10:11.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-8qlpf" for this suite.
Dec 28 11:11:18.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:11:18.646: INFO: namespace: e2e-tests-deployment-8qlpf, resource: bindings, ignored listing per whitelist
Dec 28 11:11:18.698: INFO: namespace e2e-tests-deployment-8qlpf deletion completed in 1m5.259582799s

• [SLOW TEST:128.794 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
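Editorial note: the test above exercises the Deployment controller's proportional scaling. As a rough illustration of the idea (this is a simplified sketch, not the controller's actual code — the real implementation in `kube-controller-manager` handles surge limits, annotations, and multiple ReplicaSets), a replica delta is split across ReplicaSets in proportion to their current sizes, with the integer-rounding leftover biased toward the largest ReplicaSet:

```shell
# Simplified two-ReplicaSet sketch of proportional scaling.
# $1 = size of the larger ReplicaSet, $2 = size of the smaller one,
# $3 = replica delta to distribute. Prints the two new sizes.
split_proportionally() {
  local total share1 share2 leftover
  total=$(( $1 + $2 ))
  # Each ReplicaSet gets a share proportional to its current size.
  share1=$(( $1 * $3 / total ))
  share2=$(( $2 * $3 / total ))
  # Integer division can leave replicas unassigned; give the
  # remainder to the largest ReplicaSet (mirroring the bias the
  # controller applies, per the Deployment documentation).
  leftover=$(( $3 - share1 - share2 ))
  echo "$(( $1 + share1 + leftover )) $(( $2 + share2 ))"
}
```

For example, scaling a Deployment by +5 while its rollout sits at 7 old / 3 new replicas yields 11 / 4 rather than dumping all five new replicas into one ReplicaSet.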
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:11:18.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-47r2x.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47r2x.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-47r2x.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-47r2x.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47r2x.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-47r2x.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 11:11:52.184: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.187: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.193: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.197: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.200: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.204: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.208: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47r2x.svc.cluster.local from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.212: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.215: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.218: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-c65de621-2962-11ea-8e71-0242ac110005)
Dec 28 11:11:52.218: INFO: Lookups using e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47r2x.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 28 11:11:57.546: INFO: DNS probes using e2e-tests-dns-47r2x/dns-test-c65de621-2962-11ea-8e71-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:11:57.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-47r2x" for this suite.
Dec 28 11:12:05.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:12:05.922: INFO: namespace: e2e-tests-dns-47r2x, resource: bindings, ignored listing per whitelist
Dec 28 11:12:06.015: INFO: namespace e2e-tests-dns-47r2x deletion completed in 8.414937105s

• [SLOW TEST:47.317 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
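Every lookup in the probe script logged at the start of this spec follows the same pattern: capture the `dig` output, and write an `OK` marker file only if the answer was non-empty; the pod A record name is built by dashing the pod IP. A minimal sketch of that pattern, with a hypothetical `lookup` stub standing in for `dig` and a temp directory standing in for the pod's `/results` volume:

```shell
# Hypothetical stub standing in for `dig +notcp +noall +answer +search <name> A`.
lookup() { echo "10.96.0.1"; }

results=$(mktemp -d)   # stands in for the probe pod's /results volume

# The marker file is written only when the lookup returned something.
check="$(lookup)" && test -n "$check" && echo OK > "$results/wheezy_udp@kubernetes.default"

# The pod A record is derived from the pod IP, dots replaced with dashes:
ip="10.32.0.4"
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-47r2x.pod.cluster.local"}')

cat "$results/wheezy_udp@kubernetes.default"   # OK
echo "$podARec"   # 10-32-0-4.e2e-tests-dns-47r2x.pod.cluster.local
```

The test then only has to poll `/results` for the marker files, which is why a transient resolver failure (as in the `Unable to read jessie_*` lines above) can still converge to `DNS probes ... succeeded` on a later iteration.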
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:12:06.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 28 11:12:16.886: INFO: Successfully updated pod "annotationupdatee1eefcd8-2962-11ea-8e71-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:12:19.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dbzm8" for this suite.
Dec 28 11:12:43.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:12:43.344: INFO: namespace: e2e-tests-downward-api-dbzm8, resource: bindings, ignored listing per whitelist
Dec 28 11:12:43.379: INFO: namespace e2e-tests-downward-api-dbzm8 deletion completed in 24.329855272s

• [SLOW TEST:37.364 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:12:43.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 28 11:12:43.596: INFO: Waiting up to 5m0s for pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005" in namespace "e2e-tests-containers-f4z77" to be "success or failure"
Dec 28 11:12:43.618: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.524804ms
Dec 28 11:12:45.704: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107670446s
Dec 28 11:12:47.729: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132649946s
Dec 28 11:12:50.834: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.237755442s
Dec 28 11:12:52.849: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.252885878s
Dec 28 11:12:54.901: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.30498025s
STEP: Saw pod success
Dec 28 11:12:54.901: INFO: Pod "client-containers-f832cb13-2962-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:12:54.922: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f832cb13-2962-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 11:12:55.119: INFO: Waiting for pod client-containers-f832cb13-2962-11ea-8e71-0242ac110005 to disappear
Dec 28 11:12:55.129: INFO: Pod client-containers-f832cb13-2962-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:12:55.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-f4z77" for this suite.
Dec 28 11:13:01.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:13:01.350: INFO: namespace: e2e-tests-containers-f4z77, resource: bindings, ignored listing per whitelist
Dec 28 11:13:01.361: INFO: namespace e2e-tests-containers-f4z77 deletion completed in 6.21919126s

• [SLOW TEST:17.982 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
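The override this spec exercises is the pod-spec `command` field, which replaces the image's Docker `ENTRYPOINT` (whereas `args` would replace `CMD`). A minimal pod sketch of that mechanism; the pod name, image, and echoed strings are illustrative assumptions, not values read from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed test image
    # `command` overrides the image's ENTRYPOINT; `args` would override CMD.
    command: ["/bin/echo", "override", "entrypoint"]
```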
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:13:01.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 28 11:13:01.635: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-87ghn,SelfLink:/api/v1/namespaces/e2e-tests-watch-87ghn/configmaps/e2e-watch-test-watch-closed,UID:02f37f42-2963-11ea-a994-fa163e34d433,ResourceVersion:16338672,Generation:0,CreationTimestamp:2019-12-28 11:13:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 11:13:01.635: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-87ghn,SelfLink:/api/v1/namespaces/e2e-tests-watch-87ghn/configmaps/e2e-watch-test-watch-closed,UID:02f37f42-2963-11ea-a994-fa163e34d433,ResourceVersion:16338673,Generation:0,CreationTimestamp:2019-12-28 11:13:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 28 11:13:01.701: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-87ghn,SelfLink:/api/v1/namespaces/e2e-tests-watch-87ghn/configmaps/e2e-watch-test-watch-closed,UID:02f37f42-2963-11ea-a994-fa163e34d433,ResourceVersion:16338674,Generation:0,CreationTimestamp:2019-12-28 11:13:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 11:13:01.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-87ghn,SelfLink:/api/v1/namespaces/e2e-tests-watch-87ghn/configmaps/e2e-watch-test-watch-closed,UID:02f37f42-2963-11ea-a994-fa163e34d433,ResourceVersion:16338675,Generation:0,CreationTimestamp:2019-12-28 11:13:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:13:01.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-87ghn" for this suite.
Dec 28 11:13:07.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:13:07.873: INFO: namespace: e2e-tests-watch-87ghn, resource: bindings, ignored listing per whitelist
Dec 28 11:13:07.886: INFO: namespace e2e-tests-watch-87ghn deletion completed in 6.177254367s

• [SLOW TEST:6.525 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
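The resume step in this spec relies on the watch API accepting a `resourceVersion` parameter: a new watch started from the last observed version replays every change made while the old watch was closed (here the second MODIFIED at 16338674 and the DELETED at 16338675). A sketch of the same pattern against the raw API, assuming a reachable cluster (not runnable standalone); `16338673` is the last event the first watch saw in this log:

```shell
NS=e2e-tests-watch-87ghn   # namespace from this run; any namespace works

# First watch: stream events and remember the resourceVersion of the
# last notification received before closing the watch.
kubectl get --raw "/api/v1/namespaces/$NS/configmaps?watch=true"

# Resume from that version: the API server replays every change made
# while the watch was down (MODIFIED, then DELETED in this spec).
kubectl get --raw "/api/v1/namespaces/$NS/configmaps?watch=true&resourceVersion=16338673"
```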
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:13:07.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 28 11:13:20.783: INFO: Successfully updated pod "labelsupdate06d562d4-2963-11ea-8e71-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:13:22.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7f28m" for this suite.
Dec 28 11:13:47.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:13:47.162: INFO: namespace: e2e-tests-projected-7f28m, resource: bindings, ignored listing per whitelist
Dec 28 11:13:47.224: INFO: namespace e2e-tests-projected-7f28m deletion completed in 24.246433757s

• [SLOW TEST:39.338 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:13:47.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 28 11:14:02.642: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:14:04.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-qmrlw" for this suite.
Dec 28 11:14:30.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:14:30.942: INFO: namespace: e2e-tests-replicaset-qmrlw, resource: bindings, ignored listing per whitelist
Dec 28 11:14:30.942: INFO: namespace e2e-tests-replicaset-qmrlw deletion completed in 26.789518212s

• [SLOW TEST:43.718 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:14:30.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 28 11:14:31.061: INFO: Waiting up to 5m0s for pod "pod-383f413b-2963-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-6gg25" to be "success or failure"
Dec 28 11:14:31.073: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.048746ms
Dec 28 11:14:33.241: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17976943s
Dec 28 11:14:35.266: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2045444s
Dec 28 11:14:37.616: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554429022s
Dec 28 11:14:39.628: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566866273s
Dec 28 11:14:41.642: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.580161898s
STEP: Saw pod success
Dec 28 11:14:41.642: INFO: Pod "pod-383f413b-2963-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:14:41.646: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-383f413b-2963-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 11:14:42.596: INFO: Waiting for pod pod-383f413b-2963-11ea-8e71-0242ac110005 to disappear
Dec 28 11:14:42.606: INFO: Pod pod-383f413b-2963-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:14:42.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6gg25" for this suite.
Dec 28 11:14:48.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:14:48.702: INFO: namespace: e2e-tests-emptydir-6gg25, resource: bindings, ignored listing per whitelist
Dec 28 11:14:48.785: INFO: namespace e2e-tests-emptydir-6gg25 deletion completed in 6.168406552s

• [SLOW TEST:17.842 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
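What this spec asserts is that a file created on the default-medium emptyDir mount carries the requested 0666 mode. The same check, sketched locally with a temp directory standing in for the volume mount (the path and file name are stand-ins, assuming GNU `stat`):

```shell
mount=$(mktemp -d)        # stands in for the emptyDir mount path
: > "$mount/mount-test"   # the test container writes a file on the volume
chmod 0666 "$mount/mount-test"

# GNU stat: print the octal permission bits of the file
mode=$(stat -c '%a' "$mount/mount-test")
echo "$mode"   # 666
```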
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:14:48.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 11:14:48.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mv9jt'
Dec 28 11:14:51.097: INFO: stderr: ""
Dec 28 11:14:51.098: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 28 11:15:06.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mv9jt -o json'
Dec 28 11:15:06.340: INFO: stderr: ""
Dec 28 11:15:06.341: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-28T11:14:51Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-mv9jt\",\n        \"resourceVersion\": \"16338925\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-mv9jt/pods/e2e-test-nginx-pod\",\n        \"uid\": \"442efff8-2963-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jnz9d\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jnz9d\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jnz9d\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T11:14:51Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T11:15:01Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T11:15:01Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-28T11:14:51Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://29a7620941b51a0206e9e6fe3f5f64923689a3fd1be04a794928ef02fba61adb\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2019-12-28T11:15:00Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-28T11:14:51Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 28 11:15:06.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-mv9jt'
Dec 28 11:15:06.818: INFO: stderr: ""
Dec 28 11:15:06.818: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 28 11:15:06.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mv9jt'
Dec 28 11:15:15.792: INFO: stderr: ""
Dec 28 11:15:15.792: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:15:15.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mv9jt" for this suite.
Dec 28 11:15:21.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:15:22.088: INFO: namespace: e2e-tests-kubectl-mv9jt, resource: bindings, ignored listing per whitelist
Dec 28 11:15:22.146: INFO: namespace e2e-tests-kubectl-mv9jt deletion completed in 6.279602287s

• [SLOW TEST:33.361 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
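The flow this spec drives is create-then-replace: run a single-container pod, fetch its full serialized object, rewrite the image field, and feed the result back through `kubectl replace`. A sketch of that flow, assuming a reachable cluster; the `sed` edit is an illustrative stand-in for however the caller rewrites the image field:

```shell
NS=e2e-tests-kubectl-mv9jt   # namespace from this run

# Create the pod (flags as logged by this v1.13 client; --generator
# was removed from kubectl run in later releases).
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --labels=run=e2e-test-nginx-pod -n "$NS"

# Swap the image by editing the serialized pod and piping it to replace:
kubectl get pod e2e-test-nginx-pod -n "$NS" -o json \
  | sed 's|nginx:1.14-alpine|busybox:1.29|' \
  | kubectl replace -f - -n "$NS"
```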
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:15:22.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-hfzj9
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-hfzj9
STEP: Deleting pre-stop pod
Dec 28 11:15:51.595: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:15:51.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-hfzj9" for this suite.
Dec 28 11:16:31.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:16:31.971: INFO: namespace: e2e-tests-prestop-hfzj9, resource: bindings, ignored listing per whitelist
Dec 28 11:16:32.047: INFO: namespace e2e-tests-prestop-hfzj9 deletion completed in 40.35666473s

• [SLOW TEST:69.901 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:16:32.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-d475
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 11:16:32.374: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d475" in namespace "e2e-tests-subpath-hbzdb" to be "success or failure"
Dec 28 11:16:32.396: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 21.755677ms
Dec 28 11:16:34.419: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044914462s
Dec 28 11:16:36.439: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065351329s
Dec 28 11:16:39.131: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756974474s
Dec 28 11:16:41.142: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 8.768194548s
Dec 28 11:16:43.161: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 10.786842172s
Dec 28 11:16:45.175: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 12.800929205s
Dec 28 11:16:47.191: INFO: Pod "pod-subpath-test-projected-d475": Phase="Pending", Reason="", readiness=false. Elapsed: 14.817636453s
Dec 28 11:16:49.212: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 16.838546318s
Dec 28 11:16:51.224: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 18.850463305s
Dec 28 11:16:53.243: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 20.869458994s
Dec 28 11:16:55.262: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 22.888303562s
Dec 28 11:16:57.280: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 24.905773021s
Dec 28 11:16:59.302: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 26.927696277s
Dec 28 11:17:01.321: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 28.94702609s
Dec 28 11:17:03.334: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 30.960042038s
Dec 28 11:17:05.346: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 32.971690632s
Dec 28 11:17:07.356: INFO: Pod "pod-subpath-test-projected-d475": Phase="Running", Reason="", readiness=false. Elapsed: 34.982139756s
Dec 28 11:17:09.971: INFO: Pod "pod-subpath-test-projected-d475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.596934125s
STEP: Saw pod success
Dec 28 11:17:09.971: INFO: Pod "pod-subpath-test-projected-d475" satisfied condition "success or failure"
Dec 28 11:17:09.979: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-d475 container test-container-subpath-projected-d475: 
STEP: delete the pod
Dec 28 11:17:10.700: INFO: Waiting for pod pod-subpath-test-projected-d475 to disappear
Dec 28 11:17:10.728: INFO: Pod pod-subpath-test-projected-d475 no longer exists
STEP: Deleting pod pod-subpath-test-projected-d475
Dec 28 11:17:10.728: INFO: Deleting pod "pod-subpath-test-projected-d475" in namespace "e2e-tests-subpath-hbzdb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:17:10.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hbzdb" for this suite.
Dec 28 11:17:18.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:17:18.904: INFO: namespace: e2e-tests-subpath-hbzdb, resource: bindings, ignored listing per whitelist
Dec 28 11:17:18.984: INFO: namespace e2e-tests-subpath-hbzdb deletion completed in 8.188592736s

• [SLOW TEST:46.937 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:17:18.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 28 11:17:19.692: INFO: created pod pod-service-account-defaultsa
Dec 28 11:17:19.693: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 28 11:17:19.729: INFO: created pod pod-service-account-mountsa
Dec 28 11:17:19.729: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 28 11:17:19.859: INFO: created pod pod-service-account-nomountsa
Dec 28 11:17:19.859: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 28 11:17:19.892: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 28 11:17:19.893: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 28 11:17:19.932: INFO: created pod pod-service-account-mountsa-mountspec
Dec 28 11:17:19.932: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 28 11:17:20.169: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 28 11:17:20.169: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 28 11:17:20.281: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 28 11:17:20.281: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 28 11:17:20.479: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 28 11:17:20.479: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 28 11:17:21.276: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 28 11:17:21.276: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:17:21.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-6ghrk" for this suite.
Dec 28 11:17:57.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:17:58.068: INFO: namespace: e2e-tests-svcaccounts-6ghrk, resource: bindings, ignored listing per whitelist
Dec 28 11:17:58.069: INFO: namespace e2e-tests-svcaccounts-6ghrk deletion completed in 35.124096992s

• [SLOW TEST:39.085 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:17:58.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b3d2dcdf-2963-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:17:58.423: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-jv9xv" to be "success or failure"
Dec 28 11:17:58.664: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 240.905564ms
Dec 28 11:18:00.769: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346153495s
Dec 28 11:18:02.784: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36094005s
Dec 28 11:18:04.844: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421023198s
Dec 28 11:18:06.870: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447447667s
Dec 28 11:18:08.897: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.473904044s
STEP: Saw pod success
Dec 28 11:18:08.897: INFO: Pod "pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:18:08.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 11:18:09.172: INFO: Waiting for pod pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005 to disappear
Dec 28 11:18:09.179: INFO: Pod pod-projected-configmaps-b3d6ed15-2963-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:18:09.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jv9xv" for this suite.
Dec 28 11:18:15.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:18:15.259: INFO: namespace: e2e-tests-projected-jv9xv, resource: bindings, ignored listing per whitelist
Dec 28 11:18:15.404: INFO: namespace e2e-tests-projected-jv9xv deletion completed in 6.219564848s

• [SLOW TEST:17.335 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:18:15.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 11:18:15.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-ssmfh" to be "success or failure"
Dec 28 11:18:15.645: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.783811ms
Dec 28 11:18:17.857: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234716566s
Dec 28 11:18:19.878: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.255038674s
Dec 28 11:18:22.064: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440758459s
Dec 28 11:18:24.083: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460628069s
Dec 28 11:18:26.094: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.471461922s
STEP: Saw pod success
Dec 28 11:18:26.094: INFO: Pod "downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:18:26.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 11:18:26.768: INFO: Waiting for pod downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005 to disappear
Dec 28 11:18:27.043: INFO: Pod downwardapi-volume-be188aeb-2963-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:18:27.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ssmfh" for this suite.
Dec 28 11:18:33.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:18:33.350: INFO: namespace: e2e-tests-downward-api-ssmfh, resource: bindings, ignored listing per whitelist
Dec 28 11:18:33.354: INFO: namespace e2e-tests-downward-api-ssmfh deletion completed in 6.296079862s

• [SLOW TEST:17.949 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:18:33.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 28 11:18:41.903: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c8eed67d-2963-11ea-8e71-0242ac110005,GenerateName:,Namespace:e2e-tests-events-hpj6q,SelfLink:/api/v1/namespaces/e2e-tests-events-hpj6q/pods/send-events-c8eed67d-2963-11ea-8e71-0242ac110005,UID:c8f079f3-2963-11ea-a994-fa163e34d433,ResourceVersion:16339437,Generation:0,CreationTimestamp:2019-12-28 11:18:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 781828826,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-whxqg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-whxqg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-whxqg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f718d0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001f718f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:18:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:18:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:18:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-28 11:18:33 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-28 11:18:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://64ba258c5836ffc536c849d5e8b05f65d742fce07868b5d4d8b1d86fc5f783ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 28 11:18:43.960: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 28 11:18:46.016: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:18:46.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-hpj6q" for this suite.
Dec 28 11:19:26.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:19:26.246: INFO: namespace: e2e-tests-events-hpj6q, resource: bindings, ignored listing per whitelist
Dec 28 11:19:26.258: INFO: namespace e2e-tests-events-hpj6q deletion completed in 40.203915834s

• [SLOW TEST:52.904 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:19:26.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-e85c35af-2963-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:19:40.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-crn4p" for this suite.
Dec 28 11:20:04.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:20:04.837: INFO: namespace: e2e-tests-configmap-crn4p, resource: bindings, ignored listing per whitelist
Dec 28 11:20:04.912: INFO: namespace e2e-tests-configmap-crn4p deletion completed in 24.163369341s

• [SLOW TEST:38.655 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:20:04.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:20:05.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:20:15.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vgp6d" for this suite.
Dec 28 11:21:09.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:21:09.304: INFO: namespace: e2e-tests-pods-vgp6d, resource: bindings, ignored listing per whitelist
Dec 28 11:21:09.512: INFO: namespace e2e-tests-pods-vgp6d deletion completed in 54.364665516s

• [SLOW TEST:64.599 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:21:09.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-25fa8cc2-2964-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:21:09.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-87lss" to be "success or failure"
Dec 28 11:21:09.942: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.218148ms
Dec 28 11:21:11.958: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050375964s
Dec 28 11:21:13.980: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072407742s
Dec 28 11:21:16.217: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.308819366s
Dec 28 11:21:18.240: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33206835s
Dec 28 11:21:20.267: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.359419169s
STEP: Saw pod success
Dec 28 11:21:20.267: INFO: Pod "pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:21:20.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 11:21:20.604: INFO: Waiting for pod pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005 to disappear
Dec 28 11:21:20.692: INFO: Pod pod-configmaps-25fba677-2964-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:21:20.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-87lss" for this suite.
Dec 28 11:21:26.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:21:26.917: INFO: namespace: e2e-tests-configmap-87lss, resource: bindings, ignored listing per whitelist
Dec 28 11:21:26.931: INFO: namespace e2e-tests-configmap-87lss deletion completed in 6.144575674s

• [SLOW TEST:17.419 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:21:26.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 28 11:21:27.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:27.475: INFO: stderr: ""
Dec 28 11:21:27.475: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 11:21:27.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:27.559: INFO: stderr: ""
Dec 28 11:21:27.559: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Dec 28 11:21:32.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:32.749: INFO: stderr: ""
Dec 28 11:21:32.749: INFO: stdout: "update-demo-nautilus-gpwtk update-demo-nautilus-jc9k4 "
Dec 28 11:21:32.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpwtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:32.892: INFO: stderr: ""
Dec 28 11:21:32.892: INFO: stdout: ""
Dec 28 11:21:32.892: INFO: update-demo-nautilus-gpwtk is created but not running
Dec 28 11:21:37.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:38.032: INFO: stderr: ""
Dec 28 11:21:38.032: INFO: stdout: "update-demo-nautilus-gpwtk update-demo-nautilus-jc9k4 "
Dec 28 11:21:38.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpwtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:38.113: INFO: stderr: ""
Dec 28 11:21:38.113: INFO: stdout: ""
Dec 28 11:21:38.113: INFO: update-demo-nautilus-gpwtk is created but not running
Dec 28 11:21:43.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:43.271: INFO: stderr: ""
Dec 28 11:21:43.271: INFO: stdout: "update-demo-nautilus-gpwtk update-demo-nautilus-jc9k4 "
Dec 28 11:21:43.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpwtk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:43.405: INFO: stderr: ""
Dec 28 11:21:43.405: INFO: stdout: "true"
Dec 28 11:21:43.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpwtk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:43.608: INFO: stderr: ""
Dec 28 11:21:43.608: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 11:21:43.608: INFO: validating pod update-demo-nautilus-gpwtk
Dec 28 11:21:43.667: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 11:21:43.667: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 11:21:43.667: INFO: update-demo-nautilus-gpwtk is verified up and running
Dec 28 11:21:43.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jc9k4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:43.810: INFO: stderr: ""
Dec 28 11:21:43.810: INFO: stdout: "true"
Dec 28 11:21:43.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jc9k4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:21:43.925: INFO: stderr: ""
Dec 28 11:21:43.925: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 11:21:43.925: INFO: validating pod update-demo-nautilus-jc9k4
Dec 28 11:21:43.957: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 11:21:43.957: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 11:21:43.957: INFO: update-demo-nautilus-jc9k4 is verified up and running
STEP: rolling-update to new replication controller
Dec 28 11:21:43.963: INFO: scanned /root for discovery docs: 
Dec 28 11:21:43.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:22.152: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 28 11:22:22.152: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 11:22:22.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:22.392: INFO: stderr: ""
Dec 28 11:22:22.392: INFO: stdout: "update-demo-kitten-djkhh update-demo-kitten-mlb7m update-demo-nautilus-jc9k4 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 28 11:22:27.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:27.591: INFO: stderr: ""
Dec 28 11:22:27.591: INFO: stdout: "update-demo-kitten-djkhh update-demo-kitten-mlb7m "
Dec 28 11:22:27.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-djkhh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:27.702: INFO: stderr: ""
Dec 28 11:22:27.702: INFO: stdout: "true"
Dec 28 11:22:27.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-djkhh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:27.814: INFO: stderr: ""
Dec 28 11:22:27.814: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 28 11:22:27.814: INFO: validating pod update-demo-kitten-djkhh
Dec 28 11:22:27.859: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 28 11:22:27.859: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 28 11:22:27.859: INFO: update-demo-kitten-djkhh is verified up and running
Dec 28 11:22:27.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mlb7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:27.950: INFO: stderr: ""
Dec 28 11:22:27.950: INFO: stdout: "true"
Dec 28 11:22:27.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mlb7m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tk96m'
Dec 28 11:22:28.048: INFO: stderr: ""
Dec 28 11:22:28.048: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 28 11:22:28.048: INFO: validating pod update-demo-kitten-mlb7m
Dec 28 11:22:28.057: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 28 11:22:28.057: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 28 11:22:28.057: INFO: update-demo-kitten-mlb7m is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:22:28.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tk96m" for this suite.
Dec 28 11:22:58.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:22:58.211: INFO: namespace: e2e-tests-kubectl-tk96m, resource: bindings, ignored listing per whitelist
Dec 28 11:22:58.264: INFO: namespace e2e-tests-kubectl-tk96m deletion completed in 30.201109153s

• [SLOW TEST:91.333 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:22:58.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 11:22:58.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ll7w7'
Dec 28 11:22:58.649: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 11:22:58.649: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 28 11:22:58.657: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 28 11:22:58.676: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 28 11:22:58.776: INFO: scanned /root for discovery docs: 
Dec 28 11:22:58.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ll7w7'
Dec 28 11:23:26.525: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 28 11:23:26.525: INFO: stdout: "Created e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f\nScaling up e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 28 11:23:26.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ll7w7'
Dec 28 11:23:26.759: INFO: stderr: ""
Dec 28 11:23:26.759: INFO: stdout: "e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f-pw5hf "
Dec 28 11:23:26.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f-pw5hf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ll7w7'
Dec 28 11:23:26.940: INFO: stderr: ""
Dec 28 11:23:26.940: INFO: stdout: "true"
Dec 28 11:23:26.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f-pw5hf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ll7w7'
Dec 28 11:23:27.079: INFO: stderr: ""
Dec 28 11:23:27.079: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 28 11:23:27.079: INFO: e2e-test-nginx-rc-72c36c51d27795b81610207b4cf6369f-pw5hf is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 28 11:23:27.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ll7w7'
Dec 28 11:23:27.234: INFO: stderr: ""
Dec 28 11:23:27.235: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:23:27.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ll7w7" for this suite.
Dec 28 11:23:35.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:23:35.438: INFO: namespace: e2e-tests-kubectl-ll7w7, resource: bindings, ignored listing per whitelist
Dec 28 11:23:35.533: INFO: namespace e2e-tests-kubectl-ll7w7 deletion completed in 8.267793917s

• [SLOW TEST:37.269 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:23:35.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 28 11:23:35.702: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 11:23:36.062: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 11:23:36.156: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 28 11:23:36.309: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 28 11:23:36.309: INFO: 	Container coredns ready: true, restart count 0
Dec 28 11:23:36.309: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 28 11:23:36.309: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 11:23:36.309: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 11:23:36.309: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 28 11:23:36.309: INFO: 	Container weave ready: true, restart count 0
Dec 28 11:23:36.309: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 11:23:36.309: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 28 11:23:36.309: INFO: 	Container coredns ready: true, restart count 0
Dec 28 11:23:36.309: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 11:23:36.309: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 11:23:36.309: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e4853163ba352a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:23:37.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nkrrq" for this suite.
Dec 28 11:23:43.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:23:43.880: INFO: namespace: e2e-tests-sched-pred-nkrrq, resource: bindings, ignored listing per whitelist
Dec 28 11:23:43.909: INFO: namespace e2e-tests-sched-pred-nkrrq deletion completed in 6.220923615s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:8.376 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:23:43.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:23:54.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-z97vs" for this suite.
Dec 28 11:24:36.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:24:36.701: INFO: namespace: e2e-tests-kubelet-test-z97vs, resource: bindings, ignored listing per whitelist
Dec 28 11:24:36.701: INFO: namespace e2e-tests-kubelet-test-z97vs deletion completed in 42.317448577s

• [SLOW TEST:52.792 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:24:36.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:24:36.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 28 11:24:37.056: INFO: stderr: ""
Dec 28 11:24:37.056: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:24:37.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w2vwp" for this suite.
Dec 28 11:24:43.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:24:43.228: INFO: namespace: e2e-tests-kubectl-w2vwp, resource: bindings, ignored listing per whitelist
Dec 28 11:24:43.238: INFO: namespace e2e-tests-kubectl-w2vwp deletion completed in 6.164873209s

• [SLOW TEST:6.537 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:24:43.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:24:43.386: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 28 11:24:43.411: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 28 11:24:48.472: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 11:24:54.509: INFO: Creating deployment "test-rolling-update-deployment"
Dec 28 11:24:54.541: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 28 11:24:54.583: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 28 11:24:56.650: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 28 11:24:56.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129094, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 11:24:58.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129094, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 11:25:00.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129094, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 11:25:02.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129095, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713129094, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 11:25:05.712: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 28 11:25:06.050: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-nvrcp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nvrcp/deployments/test-rolling-update-deployment,UID:abde77d0-2964-11ea-a994-fa163e34d433,ResourceVersion:16340285,Generation:1,CreationTimestamp:2019-12-28 11:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-28 11:24:55 +0000 UTC 2019-12-28 11:24:55 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-28 11:25:04 +0000 UTC 2019-12-28 11:24:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 28 11:25:06.123: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-nvrcp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nvrcp/replicasets/test-rolling-update-deployment-75db98fb4c,UID:ac0019bd-2964-11ea-a994-fa163e34d433,ResourceVersion:16340276,Generation:1,CreationTimestamp:2019-12-28 11:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment abde77d0-2964-11ea-a994-fa163e34d433 0xc0022624e7 0xc0022624e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 28 11:25:06.123: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 28 11:25:06.123: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-nvrcp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nvrcp/replicasets/test-rolling-update-controller,UID:a53ca1ef-2964-11ea-a994-fa163e34d433,ResourceVersion:16340284,Generation:2,CreationTimestamp:2019-12-28 11:24:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment abde77d0-2964-11ea-a994-fa163e34d433 0xc002262427 0xc002262428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 11:25:06.149: INFO: Pod "test-rolling-update-deployment-75db98fb4c-x24f9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-x24f9,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-nvrcp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nvrcp/pods/test-rolling-update-deployment-75db98fb4c-x24f9,UID:ac02309a-2964-11ea-a994-fa163e34d433,ResourceVersion:16340275,Generation:0,CreationTimestamp:2019-12-28 11:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c ac0019bd-2964-11ea-a994-fa163e34d433 0xc002262dc7 0xc002262dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pdk6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pdk6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pdk6s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002262e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002262e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:24:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:25:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:25:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-28 11:24:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-28 11:25:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f5d16e1aab2f0506b83b9ff440d20cc4bc3b511ed58374f4f8786de33b08f795}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:25:06.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nvrcp" for this suite.
Dec 28 11:25:16.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:25:16.382: INFO: namespace: e2e-tests-deployment-nvrcp, resource: bindings, ignored listing per whitelist
Dec 28 11:25:16.382: INFO: namespace e2e-tests-deployment-nvrcp deletion completed in 10.204498956s

• [SLOW TEST:33.143 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
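The rolling-update test above checks that, once the rollout settles, the Deployment owns exactly one new ReplicaSet (at full, ready replica count) and that every old ReplicaSet — including the adopted `test-rolling-update-controller` — is scaled to zero. A minimal sketch of that check, using plain dicts as stand-ins for the real API objects (the function names here are illustrative, not the e2e framework's):

```python
# Hypothetical sketch of the post-rollout invariant the test asserts.
# ReplicaSets are simplified dicts, not real Kubernetes API objects.

def classify_replica_sets(replica_sets, new_pod_template_hash):
    """Split a Deployment's ReplicaSets into (new, old) by pod-template-hash label."""
    new_rs = [rs for rs in replica_sets
              if rs["labels"].get("pod-template-hash") == new_pod_template_hash]
    old_rs = [rs for rs in replica_sets
              if rs["labels"].get("pod-template-hash") != new_pod_template_hash]
    return new_rs, old_rs

def rollout_complete(replica_sets, new_pod_template_hash):
    """True when one new RS holds all ready replicas and every old RS is at zero."""
    new_rs, old_rs = classify_replica_sets(replica_sets, new_pod_template_hash)
    return (len(new_rs) == 1
            and new_rs[0]["replicas"] == new_rs[0]["ready_replicas"]
            and all(rs["replicas"] == 0 for rs in old_rs))

# Mirrors the dump above: the adopted old controller is at 0, the new RS at 1/1.
rs_list = [
    {"labels": {"pod-template-hash": "75db98fb4c"}, "replicas": 1, "ready_replicas": 1},
    {"labels": {}, "replicas": 0, "ready_replicas": 0},  # adopted old controller
]
assert rollout_complete(rs_list, "75db98fb4c")
```

This is the same shape as the logged state: the new ReplicaSet `test-rolling-update-deployment-75db98fb4c` reports `Replicas:1,ReadyReplicas:1` while the old one reports `Replicas:0`.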
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:25:16.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 28 11:25:16.754: INFO: Waiting up to 5m0s for pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-czm46" to be "success or failure"
Dec 28 11:25:16.840: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.903725ms
Dec 28 11:25:18.854: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099813926s
Dec 28 11:25:20.887: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133584022s
Dec 28 11:25:22.911: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157597711s
Dec 28 11:25:24.947: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193023318s
Dec 28 11:25:26.980: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.226250592s
STEP: Saw pod success
Dec 28 11:25:26.980: INFO: Pod "downward-api-b91b6697-2964-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:25:26.989: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b91b6697-2964-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 11:25:27.172: INFO: Waiting for pod downward-api-b91b6697-2964-11ea-8e71-0242ac110005 to disappear
Dec 28 11:25:27.181: INFO: Pod downward-api-b91b6697-2964-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:25:27.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-czm46" for this suite.
Dec 28 11:25:33.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:25:33.339: INFO: namespace: e2e-tests-downward-api-czm46, resource: bindings, ignored listing per whitelist
Dec 28 11:25:33.415: INFO: namespace e2e-tests-downward-api-czm46 deletion completed in 6.226704447s

• [SLOW TEST:17.033 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
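The Downward API test above verifies that pod name, namespace, and IP are exposed to the container as environment variables via `fieldRef`. A hedged sketch of how such env entries are shaped (the helper and the env-var names are illustrative; only the `fieldPath` values come from the Downward API itself):

```python
# Sketch: build container env entries that project pod metadata through
# the Downward API. Plain dicts stand in for the PodSpec structures.

def downward_api_env(mapping):
    """Build container env entries from env-var name -> pod fieldPath."""
    return [
        {"name": name,
         "valueFrom": {"fieldRef": {"fieldPath": field_path}}}
        for name, field_path in mapping.items()
    ]

env = downward_api_env({
    "POD_NAME": "metadata.name",
    "POD_NAMESPACE": "metadata.namespace",
    "POD_IP": "status.podIP",
})
assert env[0]["valueFrom"]["fieldRef"]["fieldPath"] == "metadata.name"
```

In the actual test the `dapi-container` then echoes these variables, and the framework greps its logs for the expected values — which is why the log shows it fetching logs from the pod before deleting it.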
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:25:33.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wc4vf
Dec 28 11:25:45.627: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wc4vf
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 11:25:45.633: INFO: Initial restart count of pod liveness-http is 0
Dec 28 11:26:09.899: INFO: Restart count of pod e2e-tests-container-probe-wc4vf/liveness-http is now 1 (24.265899028s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:26:10.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wc4vf" for this suite.
Dec 28 11:26:16.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:26:16.321: INFO: namespace: e2e-tests-container-probe-wc4vf, resource: bindings, ignored listing per whitelist
Dec 28 11:26:16.414: INFO: namespace e2e-tests-container-probe-wc4vf deletion completed in 6.304955223s

• [SLOW TEST:42.999 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
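The liveness-probe test above records the pod's initial `restartCount` (0) and then polls until the kubelet restarts the container (count 1 after ~24s). A minimal sketch of that poll-until-restart loop, with `get_restart_count` standing in for the API call the framework makes (the function names and intervals here are assumptions for illustration):

```python
# Sketch of a restart-count watch: poll the container status until
# restartCount exceeds its initial value, or give up at a deadline.

import time

def wait_for_restart(get_restart_count, initial, timeout_s=120.0, poll_s=0.01):
    """Return the new restart count once it rises above `initial`."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        count = get_restart_count()
        if count > initial:
            return count
        time.sleep(poll_s)
    raise TimeoutError("pod was never restarted by the liveness probe")

# Fake status source: reports 0 twice, then a restart, like the log above.
counts = iter([0, 0, 1])
assert wait_for_restart(lambda: next(counts), initial=0) == 1
```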
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:26:16.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 28 11:26:16.828: INFO: Waiting up to 5m0s for pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-x87r7" to be "success or failure"
Dec 28 11:26:16.839: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.74051ms
Dec 28 11:26:18.864: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035506297s
Dec 28 11:26:20.877: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049081016s
Dec 28 11:26:22.893: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064310869s
Dec 28 11:26:24.959: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130428983s
Dec 28 11:26:26.977: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148154545s
STEP: Saw pod success
Dec 28 11:26:26.977: INFO: Pod "pod-dceb78c3-2964-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:26:26.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dceb78c3-2964-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 11:26:27.092: INFO: Waiting for pod pod-dceb78c3-2964-11ea-8e71-0242ac110005 to disappear
Dec 28 11:26:27.225: INFO: Pod pod-dceb78c3-2964-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:26:27.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x87r7" for this suite.
Dec 28 11:26:33.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:26:33.427: INFO: namespace: e2e-tests-emptydir-x87r7, resource: bindings, ignored listing per whitelist
Dec 28 11:26:33.467: INFO: namespace e2e-tests-emptydir-x87r7 deletion completed in 6.231094759s

• [SLOW TEST:17.053 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
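The emptyDir test above runs a pod whose container stats the mount point and compares its permission bits against the expected default-medium mode. A sketch of that in-container check over an ordinary directory (the expected mode shown, `0o777`, is an assumption for illustration, not taken from this log):

```python
# Sketch: stat a directory and report its permission bits, the kind of
# check the emptyDir "correct mode" test performs inside the container.
# The 0o777 expectation below is an illustrative assumption.

import os
import stat
import tempfile

def mount_mode(path):
    """Return the permission bits of a path as an octal string, e.g. '0o777'."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

d = tempfile.mkdtemp()
os.chmod(d, 0o777)
assert mount_mode(d) == "0o777"
```

As with the Downward API test, the pod writes its findings to stdout and the framework reads the container logs to decide "success or failure", which is the polling visible in the log.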
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:26:33.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-786wz
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-786wz
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-786wz
Dec 28 11:26:33.898: INFO: Found 0 stateful pods, waiting for 1
Dec 28 11:26:43.916: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 28 11:26:43.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:26:44.678: INFO: stderr: ""
Dec 28 11:26:44.678: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:26:44.678: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:26:44.701: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 28 11:26:54.765: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:26:54.765: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 11:26:54.963: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999417s
Dec 28 11:26:55.976: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.879568438s
Dec 28 11:26:57.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.866718433s
Dec 28 11:26:58.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.806039289s
Dec 28 11:26:59.188: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.744997471s
Dec 28 11:27:00.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.65464383s
Dec 28 11:27:01.219: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.638465389s
Dec 28 11:27:02.236: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.623623983s
Dec 28 11:27:03.248: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.606810499s
Dec 28 11:27:04.277: INFO: Verifying statefulset ss doesn't scale past 1 for another 594.985631ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-786wz
Dec 28 11:27:05.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:27:05.870: INFO: stderr: ""
Dec 28 11:27:05.870: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 11:27:05.870: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 11:27:05.892: INFO: Found 1 stateful pods, waiting for 3
Dec 28 11:27:15.940: INFO: Found 2 stateful pods, waiting for 3
Dec 28 11:27:25.909: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 11:27:25.909: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 11:27:25.909: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 11:27:35.926: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 11:27:35.926: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 11:27:35.926: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 28 11:27:35.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:27:36.468: INFO: stderr: ""
Dec 28 11:27:36.468: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:27:36.468: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:27:36.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:27:37.157: INFO: stderr: ""
Dec 28 11:27:37.157: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:27:37.157: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:27:37.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:27:37.733: INFO: stderr: ""
Dec 28 11:27:37.733: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:27:37.733: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:27:37.733: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 11:27:37.750: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 28 11:27:47.775: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:27:47.775: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:27:47.775: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:27:47.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999609s
Dec 28 11:27:48.909: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.901961405s
Dec 28 11:27:49.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.88488222s
Dec 28 11:27:50.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.861924485s
Dec 28 11:27:51.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.83483519s
Dec 28 11:27:52.986: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.825326238s
Dec 28 11:27:54.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.807721876s
Dec 28 11:27:55.697: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.791313839s
Dec 28 11:27:56.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.096756388s
Dec 28 11:27:57.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 71.809064ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-786wz
Dec 28 11:27:58.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:27:59.540: INFO: stderr: ""
Dec 28 11:27:59.540: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 11:27:59.540: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 11:27:59.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:28:00.132: INFO: stderr: ""
Dec 28 11:28:00.132: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 11:28:00.132: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 11:28:00.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:28:00.396: INFO: rc: 126
Dec 28 11:28:00.396: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 command terminated with exit code 126
 []  0xc001a7f5c0 exit status 126   true [0xc000a10128 0xc000a10140 0xc000a10158] [0xc000a10128 0xc000a10140 0xc000a10158] [0xc000a10138 0xc000a10150] [0x935700 0x935700] 0xc001dc23c0 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
command terminated with exit code 126

error:
exit status 126
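The failed `RunHostCmd` above is not fatal: the framework waits 10s and retries the same `kubectl exec` until it succeeds or the pod is gone. A minimal sketch of that fixed-interval retry loop, with `run_cmd` standing in for the kubectl invocation (names and counts here are illustrative):

```python
# Sketch of a fixed-interval retry around a host command, mirroring the
# "Waiting 10s to retry failed RunHostCmd" pattern in the log above.

import time

def retry_host_cmd(run_cmd, retries=5, interval_s=0.01):
    """Retry run_cmd() -> (rc, stdout) until rc == 0 or attempts run out."""
    rc, out = 1, ""
    for _ in range(retries):
        rc, out = run_cmd()
        if rc == 0:
            return out
        time.sleep(interval_s)  # the e2e framework waits 10s between tries
    raise RuntimeError(f"command still failing after {retries} attempts (rc={rc})")

# Fake command: fails twice (rc 126, then rc 1, like the log), then succeeds.
results = iter([(126, ""), (1, ""), (0, "moved")])
assert retry_host_cmd(lambda: next(results)) == "moved"
```

Here the retries keep failing with different errors as ss-2 terminates (exec refused in a stopped state, then the container, then the pod itself not found) — exactly what a scale-down to 0 should produce.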

Dec 28 11:28:10.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:28:10.660: INFO: rc: 1
Dec 28 11:28:10.660: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00148ab70 exit status 1   true [0xc00000f8f8 0xc00000f960 0xc00000f988] [0xc00000f8f8 0xc00000f960 0xc00000f988] [0xc00000f950 0xc00000f978] [0x935700 0x935700] 0xc00167c720 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 28 11:28:20.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:28:20.770: INFO: rc: 1
Dec 28 11:28:20.770: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00148ac90 exit status 1   true [0xc00000f9a8 0xc00000fa28 0xc00000faf0] [0xc00000f9a8 0xc00000fa28 0xc00000faf0] [0xc00000fa20 0xc00000faa0] [0x935700 0x935700] 0xc00167ca20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

[... 27 near-identical retry attempts (Dec 28 11:28:30 through 11:32:54) elided; each failed with: Error from server (NotFound): pods "ss-2" not found ...]

Dec 28 11:33:04.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-786wz ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:33:04.584: INFO: rc: 1
Dec 28 11:33:04.584: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
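The five-minute retry loop logged above comes from the framework's RunHostCmd retry helper. Its behaviour can be sketched as a small POSIX-shell function; the helper name and the commented kubectl invocation below are illustrative, not the framework's actual code:

```shell
# Retry a command at a fixed interval until it succeeds or a deadline
# (in seconds) elapses -- a sketch of the 10s/5m retry seen in the log.
retry_cmd() {
    interval=$1 deadline=$2; shift 2
    elapsed=0
    while [ "$elapsed" -lt "$deadline" ]; do
        "$@" && return 0                            # success: stop retrying
        rc=$?
        echo "rc: $rc -- waiting ${interval}s to retry" >&2
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    return 1                                        # deadline hit, give up
}

# Illustrative use (requires a reachable cluster; mirrors the logged command):
# retry_cmd 10 300 kubectl --kubeconfig=/root/.kube/config exec \
#     --namespace=e2e-tests-statefulset-786wz ss-2 -- \
#     /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Here the pod was deleted mid-loop, so every retry returned rc 1 until the deadline passed and the test moved on to scaling the set down.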
Dec 28 11:33:04.584: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 28 11:33:04.612: INFO: Deleting all statefulset in ns e2e-tests-statefulset-786wz
Dec 28 11:33:04.620: INFO: Scaling statefulset ss to 0
Dec 28 11:33:04.638: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 11:33:04.644: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:33:04.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-786wz" for this suite.
Dec 28 11:33:13.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:33:13.195: INFO: namespace: e2e-tests-statefulset-786wz, resource: bindings, ignored listing per whitelist
Dec 28 11:33:13.248: INFO: namespace e2e-tests-statefulset-786wz deletion completed in 8.462011124s

• [SLOW TEST:399.781 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:33:13.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nd7nw
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nd7nw
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nd7nw
Dec 28 11:33:13.489: INFO: Found 0 stateful pods, waiting for 1
Dec 28 11:33:23.520: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 28 11:33:23.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:33:24.187: INFO: stderr: ""
Dec 28 11:33:24.187: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:33:24.187: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:33:24.224: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 28 11:33:34.273: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:33:34.273: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 11:33:34.315: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:33:34.315: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:33:34.315: INFO: 
Dec 28 11:33:34.315: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 28 11:33:36.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982683678s
Dec 28 11:33:38.242: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.767576569s
Dec 28 11:33:39.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.056293817s
Dec 28 11:33:40.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.889437727s
Dec 28 11:33:41.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.858513821s
Dec 28 11:33:42.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.835939845s
Dec 28 11:33:43.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 651.563494ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nd7nw
Dec 28 11:33:45.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:33:48.155: INFO: stderr: ""
Dec 28 11:33:48.155: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 11:33:48.155: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 11:33:48.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:33:48.483: INFO: rc: 1
Dec 28 11:33:48.484: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00148ade0 exit status 1   true [0xc0018f0418 0xc0018f0460 0xc0018f04a8] [0xc0018f0418 0xc0018f0460 0xc0018f04a8] [0xc0018f0450 0xc0018f0498] [0x935700 0x935700] 0xc0017b9440 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 28 11:33:58.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:33:59.082: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 28 11:33:59.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 11:33:59.082: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 11:33:59.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:33:59.564: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 28 11:33:59.564: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 11:33:59.564: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 11:33:59.588: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 11:33:59.588: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 11:33:59.588: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
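The retry pattern visible in the log — run a host command, and on a nonzero exit code wait 10s and try again until it succeeds — can be sketched as follows. This is a hypothetical helper for illustration (names `run_with_retry` and `cmd_fn` are invented here), not the framework's actual `RunHostCmd` implementation:

```python
import time

def run_with_retry(cmd_fn, attempts=20, delay=10):
    """Call cmd_fn() -> (rc, stdout, stderr) until rc == 0,
    sleeping `delay` seconds between failed attempts."""
    for i in range(attempts):
        rc, stdout, stderr = cmd_fn()
        if rc == 0:
            return stdout
        if i < attempts - 1:
            time.sleep(delay)
    raise RuntimeError(
        "command still failing after %d attempts: %s" % (attempts, stderr)
    )
```

In the run above the first attempt against `ss-1` fails with "container not found", and the retry 10 seconds later succeeds once the container is reachable again.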
STEP: Scale down will not halt with unhealthy stateful pod
Dec 28 11:33:59.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:34:00.095: INFO: stderr: ""
Dec 28 11:34:00.095: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:34:00.095: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:34:00.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:34:00.921: INFO: stderr: ""
Dec 28 11:34:00.921: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:34:00.921: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:34:00.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 11:34:01.354: INFO: stderr: ""
Dec 28 11:34:01.354: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 11:34:01.354: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 11:34:01.354: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 11:34:01.369: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 28 11:34:11.396: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:34:11.396: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:34:11.396: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 28 11:34:11.444: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:11.444: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:11.444: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:11.444: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:11.444: INFO: 
Dec 28 11:34:11.444: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:12.458: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:12.458: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:12.458: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:12.458: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:12.458: INFO: 
Dec 28 11:34:12.458: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:13.906: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:13.906: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:13.906: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:13.906: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:13.906: INFO: 
Dec 28 11:34:13.906: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:14.932: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:14.932: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:14.932: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:14.932: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:14.932: INFO: 
Dec 28 11:34:14.932: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:15.956: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:15.956: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:15.956: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:15.956: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:15.956: INFO: 
Dec 28 11:34:15.956: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:16.972: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:16.972: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:16.972: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:16.973: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:16.973: INFO: 
Dec 28 11:34:16.973: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:18.660: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:18.661: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:18.661: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:18.661: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:18.661: INFO: 
Dec 28 11:34:18.661: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:20.225: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:20.225: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:20.225: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:20.225: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:20.225: INFO: 
Dec 28 11:34:20.225: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 28 11:34:21.248: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 28 11:34:21.248: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:13 +0000 UTC  }]
Dec 28 11:34:21.248: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:21.248: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:34:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 11:33:34 +0000 UTC  }]
Dec 28 11:34:21.248: INFO: 
Dec 28 11:34:21.248: INFO: StatefulSet ss has not reached scale 0, at 3
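The repeated "has not reached scale 0, at 3" lines above are a poll loop: the test re-reads the StatefulSet's pod count roughly once a second until it hits the target or a deadline expires. A minimal sketch of that wait, under the assumption of a caller-supplied `get_replicas` callable (hypothetical helper, not the e2e framework's code):

```python
import time

def wait_for_replicas(get_replicas, target=0, timeout=600, interval=1):
    """Poll get_replicas() until it equals `target`,
    or raise TimeoutError after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        current = get_replicas()
        if current == target:
            return current
        if time.monotonic() >= deadline:
            raise TimeoutError(
                "replicas still at %d, wanted %d" % (current, target)
            )
        time.sleep(interval)
```

Each iteration that misses the target corresponds to one "StatefulSet ss has not reached scale 0" line in the log.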
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-nd7nw
Dec 28 11:34:22.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:34:22.671: INFO: rc: 1
Dec 28 11:34:22.671: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00159c9c0 exit status 1   true [0xc00000fa08 0xc00000fa60 0xc00000fb50] [0xc00000fa08 0xc00000fa60 0xc00000fb50] [0xc00000fa28 0xc00000faf0] [0x935700 0x935700] 0xc001dc2720 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 28 11:34:32.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:34:32.797: INFO: rc: 1
Dec 28 11:34:32.797: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a7e810 exit status 1   true [0xc000b2e1b8 0xc000b2e1d0 0xc000b2e1e8] [0xc000b2e1b8 0xc000b2e1d0 0xc000b2e1e8] [0xc000b2e1c8 0xc000b2e1e0] [0x935700 0x935700] 0xc002510480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:34:42.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:34:42.908: INFO: rc: 1
Dec 28 11:34:42.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001db00f0 exit status 1   true [0xc00228e048 0xc00228e060 0xc00228e078] [0xc00228e048 0xc00228e060 0xc00228e078] [0xc00228e058 0xc00228e070] [0x935700 0x935700] 0xc0016552c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:34:52.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:34:53.045: INFO: rc: 1
Dec 28 11:34:53.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00159cb10 exit status 1   true [0xc00000fb58 0xc00000fb78 0xc00000fbe8] [0xc00000fb58 0xc00000fb78 0xc00000fbe8] [0xc00000fb68 0xc00000fbe0] [0x935700 0x935700] 0xc001dc29c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:35:03.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:35:03.197: INFO: rc: 1
Dec 28 11:35:03.198: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001726c90 exit status 1   true [0xc0018f06f0 0xc0018f0730 0xc0018f0758] [0xc0018f06f0 0xc0018f0730 0xc0018f0758] [0xc0018f0710 0xc0018f0750] [0x935700 0x935700] 0xc0011cba40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:35:13.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:35:13.348: INFO: rc: 1
Dec 28 11:35:13.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001726db0 exit status 1   true [0xc0018f0760 0xc0018f0788 0xc0018f07e0] [0xc0018f0760 0xc0018f0788 0xc0018f07e0] [0xc0018f0770 0xc0018f07c8] [0x935700 0x935700] 0xc0011cbce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:35:23.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:35:23.492: INFO: rc: 1
Dec 28 11:35:23.492: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00159cc60 exit status 1   true [0xc00000fbf0 0xc00000fc58 0xc00000fc88] [0xc00000fbf0 0xc00000fc58 0xc00000fc88] [0xc00000fc50 0xc00000fc78] [0x935700 0x935700] 0xc001dc2c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:35:33.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:35:33.646: INFO: rc: 1
Dec 28 11:35:33.646: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d18180 exit status 1   true [0xc00000e010 0xc00000ec90 0xc00000ef08] [0xc00000e010 0xc00000ec90 0xc00000ef08] [0xc00000ec68 0xc00000eda8] [0x935700 0x935700] 0xc001e36240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:35:43.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:35:43.758: INFO: rc: 1
Dec 28 11:35:43.758: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c10120 exit status 1   true [0xc000b2e008 0xc000b2e020 0xc000b2e038] [0xc000b2e008 0xc000b2e020 0xc000b2e038] [0xc000b2e018 0xc000b2e030] [0x935700 0x935700] 0xc001c66300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:35:53.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:35:53.881: INFO: rc: 1
Dec 28 11:35:53.882: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000dc8180 exit status 1   true [0xc00228e000 0xc00228e018 0xc00228e030] [0xc00228e000 0xc00228e018 0xc00228e030] [0xc00228e010 0xc00228e028] [0x935700 0x935700] 0xc001b69b60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:36:03.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:36:03.995: INFO: rc: 1
Dec 28 11:36:03.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000dc82d0 exit status 1   true [0xc00228e038 0xc00228e050 0xc00228e068] [0xc00228e038 0xc00228e050 0xc00228e068] [0xc00228e048 0xc00228e060] [0x935700 0x935700] 0xc001900840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:36:13.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:36:14.096: INFO: rc: 1
Dec 28 11:36:14.096: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00148a120 exit status 1   true [0xc0018f0000 0xc0018f0020 0xc0018f0078] [0xc0018f0000 0xc0018f0020 0xc0018f0078] [0xc0018f0010 0xc0018f0058] [0x935700 0x935700] 0xc000875c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:36:24.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:36:24.174: INFO: rc: 1
Dec 28 11:36:24.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d182a0 exit status 1   true [0xc00000ef80 0xc00000eff0 0xc00000f0a0] [0xc00000ef80 0xc00000eff0 0xc00000f0a0] [0xc00000efd0 0xc00000f058] [0x935700 0x935700] 0xc001e364e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:36:34.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:36:34.304: INFO: rc: 1
Dec 28 11:36:34.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d184b0 exit status 1   true [0xc00000f0c0 0xc00000f1d8 0xc00000f5c0] [0xc00000f0c0 0xc00000f1d8 0xc00000f5c0] [0xc00000f1c8 0xc00000f5b8] [0x935700 0x935700] 0xc001e37020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:36:44.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:36:44.408: INFO: rc: 1
Dec 28 11:36:44.408: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c102d0 exit status 1   true [0xc000b2e040 0xc000b2e058 0xc000b2e070] [0xc000b2e040 0xc000b2e058 0xc000b2e070] [0xc000b2e050 0xc000b2e068] [0x935700 0x935700] 0xc0013cae40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Dec 28 11:39:26.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nd7nw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 11:39:26.733: INFO: rc: 1
Dec 28 11:39:26.733: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 28 11:39:26.733: INFO: Scaling statefulset ss to 0
Dec 28 11:39:26.777: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 28 11:39:26.782: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nd7nw
Dec 28 11:39:26.785: INFO: Scaling statefulset ss to 0
Dec 28 11:39:26.839: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 11:39:26.842: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:39:26.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nd7nw" for this suite.
Dec 28 11:39:34.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:39:34.961: INFO: namespace: e2e-tests-statefulset-nd7nw, resource: bindings, ignored listing per whitelist
Dec 28 11:39:35.067: INFO: namespace e2e-tests-statefulset-nd7nw deletion completed in 8.190868475s

• [SLOW TEST:381.819 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
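The StatefulSet test above spends several minutes retrying the same `kubectl exec` every 10 seconds after pod `ss-0` disappears, before giving up and scaling down. A minimal sketch of that retry-until-deadline pattern (the `run_cmd` callable stands in for the framework's kubectl invocation; the names and timeout here are assumptions, not the framework's actual API):

```python
import time

def run_host_cmd_with_retries(run_cmd, timeout_s=600, interval_s=10,
                              now=time.monotonic, sleep=time.sleep):
    """Retry a host command until it succeeds (rc == 0) or the deadline passes.

    run_cmd returns (rc, stdout, stderr). The log above shows a fixed 10s
    wait between attempts; `now` and `sleep` are injectable for testing.
    """
    deadline = now() + timeout_s
    rc, out, err = run_cmd()
    while rc != 0 and now() < deadline:
        sleep(interval_s)
        rc, out, err = run_cmd()
    # On deadline expiry the last (failing) result is returned, matching the
    # test's behavior of logging the final rc and moving on to teardown.
    return rc, out, err
```

Note that the shell command itself ends in `|| true`, so the nonzero rc seen in the log comes from `kubectl exec` failing to find the pod, not from `mv`.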
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:39:35.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b8d2321d-2966-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 11:39:35.260: INFO: Waiting up to 5m0s for pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-r5hj6" to be "success or failure"
Dec 28 11:39:35.280: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.539112ms
Dec 28 11:39:37.318: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0587213s
Dec 28 11:39:39.341: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081777765s
Dec 28 11:39:41.646: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386339523s
Dec 28 11:39:43.655: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395809703s
Dec 28 11:39:45.956: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.696263226s
Dec 28 11:39:48.112: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.851956173s
STEP: Saw pod success
Dec 28 11:39:48.112: INFO: Pod "pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:39:48.119: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 11:39:48.310: INFO: Waiting for pod pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005 to disappear
Dec 28 11:39:48.329: INFO: Pod pod-secrets-b8d3714a-2966-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:39:48.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-r5hj6" for this suite.
Dec 28 11:39:54.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:39:54.545: INFO: namespace: e2e-tests-secrets-r5hj6, resource: bindings, ignored listing per whitelist
Dec 28 11:39:54.579: INFO: namespace e2e-tests-secrets-r5hj6 deletion completed in 6.241779012s

• [SLOW TEST:19.512 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
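The Secrets test above polls the pod's phase every couple of seconds, logging the elapsed time, until it reaches the "success or failure" condition. A sketch of that polling loop (names and the injectable `sleep` are illustrative assumptions, not the e2e framework's real helper):

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout_s=300, interval_s=2,
                                    sleep=time.sleep):
    """Poll a pod's phase until it is terminal, as the framework does above.

    get_phase stands in for a GET of the pod's status.phase. Returns the
    terminal phase ("Succeeded" or "Failed"), or raises on timeout.
    """
    elapsed = 0.0
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(interval_s)  # the log shows ~2s between "Phase=Pending" lines
        elapsed += interval_s
```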
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:39:54.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:40:01.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-qxlzz" for this suite.
Dec 28 11:40:07.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:40:07.388: INFO: namespace: e2e-tests-namespaces-qxlzz, resource: bindings, ignored listing per whitelist
Dec 28 11:40:07.429: INFO: namespace e2e-tests-namespaces-qxlzz deletion completed in 6.216019133s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vvjmp" for this suite.
Dec 28 11:40:07.434: INFO: Namespace e2e-tests-nsdeletetest-vvjmp was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-8kd8c" for this suite.
Dec 28 11:40:13.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:40:13.607: INFO: namespace: e2e-tests-nsdeletetest-8kd8c, resource: bindings, ignored listing per whitelist
Dec 28 11:40:13.715: INFO: namespace e2e-tests-nsdeletetest-8kd8c deletion completed in 6.280774175s

• [SLOW TEST:19.136 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:40:13.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:40:26.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jnxg7" for this suite.
Dec 28 11:40:32.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:40:32.517: INFO: namespace: e2e-tests-kubelet-test-jnxg7, resource: bindings, ignored listing per whitelist
Dec 28 11:40:32.600: INFO: namespace e2e-tests-kubelet-test-jnxg7 deletion completed in 6.557681956s

• [SLOW TEST:18.885 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:40:32.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-db31ff97-2966-11ea-8e71-0242ac110005
STEP: Creating secret with name s-test-opt-upd-db32000a-2966-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-db31ff97-2966-11ea-8e71-0242ac110005
STEP: Updating secret s-test-opt-upd-db32000a-2966-11ea-8e71-0242ac110005
STEP: Creating secret with name s-test-opt-create-db32003e-2966-11ea-8e71-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:40:53.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-q2g7g" for this suite.
Dec 28 11:41:17.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:41:17.858: INFO: namespace: e2e-tests-secrets-q2g7g, resource: bindings, ignored listing per whitelist
Dec 28 11:41:17.909: INFO: namespace e2e-tests-secrets-q2g7g deletion completed in 24.407476277s

• [SLOW TEST:45.308 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:41:17.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 28 11:41:18.193: INFO: Waiting up to 5m0s for pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-tlv6p" to be "success or failure"
Dec 28 11:41:18.208: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.468488ms
Dec 28 11:41:20.227: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033555269s
Dec 28 11:41:22.248: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054916569s
Dec 28 11:41:24.258: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064480407s
Dec 28 11:41:26.273: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079229838s
Dec 28 11:41:28.291: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098184504s
Dec 28 11:41:30.577: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.383409404s
STEP: Saw pod success
Dec 28 11:41:30.577: INFO: Pod "downward-api-f62e76f4-2966-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:41:30.595: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f62e76f4-2966-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 11:41:30.991: INFO: Waiting for pod downward-api-f62e76f4-2966-11ea-8e71-0242ac110005 to disappear
Dec 28 11:41:30.999: INFO: Pod downward-api-f62e76f4-2966-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:41:30.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tlv6p" for this suite.
Dec 28 11:41:37.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:41:37.168: INFO: namespace: e2e-tests-downward-api-tlv6p, resource: bindings, ignored listing per whitelist
Dec 28 11:41:37.293: INFO: namespace e2e-tests-downward-api-tlv6p deletion completed in 6.283768923s

• [SLOW TEST:19.384 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
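The Downward API test above creates a pod whose container receives the node's IP as an environment variable via a `fieldRef`. A sketch of the EnvVar entry the test's `dapi-container` presumably uses (the helper name and `HOST_IP` variable name are hypothetical; `status.hostIP` is a real downward API field path):

```python
def downward_api_env(name, field_path):
    """Build an EnvVar dict that sources its value from the pod's own
    metadata/status via the downward API (valueFrom.fieldRef)."""
    return {
        "name": name,
        "valueFrom": {"fieldRef": {"fieldPath": field_path}},
    }

# Expose the scheduling node's IP inside the container:
host_ip_env = downward_api_env("HOST_IP", "status.hostIP")
```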
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:41:37.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 28 11:41:37.620: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2xtp6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xtp6/configmaps/e2e-watch-test-label-changed,UID:01b284fd-2967-11ea-a994-fa163e34d433,ResourceVersion:16342081,Generation:0,CreationTimestamp:2019-12-28 11:41:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 11:41:37.620: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2xtp6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xtp6/configmaps/e2e-watch-test-label-changed,UID:01b284fd-2967-11ea-a994-fa163e34d433,ResourceVersion:16342082,Generation:0,CreationTimestamp:2019-12-28 11:41:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 28 11:41:37.620: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2xtp6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xtp6/configmaps/e2e-watch-test-label-changed,UID:01b284fd-2967-11ea-a994-fa163e34d433,ResourceVersion:16342083,Generation:0,CreationTimestamp:2019-12-28 11:41:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 28 11:41:47.702: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2xtp6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xtp6/configmaps/e2e-watch-test-label-changed,UID:01b284fd-2967-11ea-a994-fa163e34d433,ResourceVersion:16342097,Generation:0,CreationTimestamp:2019-12-28 11:41:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 11:41:47.702: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2xtp6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xtp6/configmaps/e2e-watch-test-label-changed,UID:01b284fd-2967-11ea-a994-fa163e34d433,ResourceVersion:16342098,Generation:0,CreationTimestamp:2019-12-28 11:41:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 28 11:41:47.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2xtp6,SelfLink:/api/v1/namespaces/e2e-tests-watch-2xtp6/configmaps/e2e-watch-test-label-changed,UID:01b284fd-2967-11ea-a994-fa163e34d433,ResourceVersion:16342099,Generation:0,CreationTimestamp:2019-12-28 11:41:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:41:47.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2xtp6" for this suite.
Dec 28 11:41:53.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:41:53.982: INFO: namespace: e2e-tests-watch-2xtp6, resource: bindings, ignored listing per whitelist
Dec 28 11:41:54.004: INFO: namespace e2e-tests-watch-2xtp6 deletion completed in 6.293627065s

• [SLOW TEST:16.711 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
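Annotation: the Watchers test above exercises label-selector scoping of watch events — when the watched object's label stops matching the selector the watcher sees DELETED, and when the label is restored it sees ADDED, even though the underlying store only saw MODIFY events. A minimal pure-Python sketch of that filtering rule (illustrative only; function and event shapes are hypothetical, not the e2e framework's Go code):

```python
def filter_watch_events(events, selector):
    """Translate raw store events into selector-scoped watch events.

    events: iterable of (event_type, labels) tuples as the store saw them.
    selector: dict of required label key/values.
    Yields (event_type, labels) as a selector-scoped watcher would see them.
    """
    def matches(labels):
        return all(labels.get(k) == v for k, v in selector.items())

    previously_matched = False
    for event_type, labels in events:
        # An object deleted from the store never matches, regardless of labels.
        now_matches = matches(labels) and event_type != "DELETED"
        if now_matches and not previously_matched:
            yield ("ADDED", labels)          # entered the selector's view
        elif now_matches and previously_matched:
            yield ("MODIFIED", labels)       # changed while in view
        elif previously_matched and not now_matches:
            yield ("DELETED", labels)        # left the selector's view
        previously_matched = now_matches
```

Feeding it the log's sequence (label changed away, restored, mutated, then deleted) reproduces the DELETED/ADDED/MODIFIED/DELETED pattern printed above.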
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:41:54.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 28 11:42:04.320: INFO: Pod pod-hostip-0bae9f0d-2967-11ea-8e71-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:42:04.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qbcmz" for this suite.
Dec 28 11:42:28.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:42:28.545: INFO: namespace: e2e-tests-pods-qbcmz, resource: bindings, ignored listing per whitelist
Dec 28 11:42:28.601: INFO: namespace e2e-tests-pods-qbcmz deletion completed in 24.274843375s

• [SLOW TEST:34.596 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
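Annotation: the "should get a host IP" test above polls the pod until `status.hostIP` is populated by the kubelet. A sketch of that poll loop in Python (the getter's dict shape is a stand-in; the real framework uses the Go client):

```python
import time

def wait_for_host_ip(get_pod_status, timeout=30.0, interval=1.0):
    """Poll a pod-status getter until status.hostIP is populated.

    get_pod_status: callable returning a dict like {"hostIP": "10.96.1.240"}
    (hypothetical shape for illustration).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        host_ip = get_pod_status().get("hostIP", "")
        if host_ip:
            return host_ip
        time.sleep(interval)
    raise TimeoutError("pod never reported a hostIP")
```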
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:42:28.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 11:42:28.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-4r4t5" to be "success or failure"
Dec 28 11:42:28.898: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.554409ms
Dec 28 11:42:30.958: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146022143s
Dec 28 11:42:32.969: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157208053s
Dec 28 11:42:35.410: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598753823s
Dec 28 11:42:37.447: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635236586s
Dec 28 11:42:39.520: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.708328467s
STEP: Saw pod success
Dec 28 11:42:39.520: INFO: Pod "downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:42:39.525: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 11:42:40.008: INFO: Waiting for pod downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:42:40.110: INFO: Pod downwardapi-volume-2044d182-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:42:40.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4r4t5" for this suite.
Dec 28 11:42:47.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:42:47.687: INFO: namespace: e2e-tests-downward-api-4r4t5, resource: bindings, ignored listing per whitelist
Dec 28 11:42:47.814: INFO: namespace e2e-tests-downward-api-4r4t5 deletion completed in 7.692191731s

• [SLOW TEST:19.212 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
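Annotation: the Downward API volume test above exposes the container's CPU request as a file via `resourceFieldRef`. Per the documented behavior, the written value is the quantity divided by the divisor (default "1"), rounded up to an integer. A rough sketch of that arithmetic (a simplification with hypothetical names, not Kubernetes' actual resource.Quantity code):

```python
import math

def cpu_request_file_value(request, divisor="1"):
    """Compute the value the downward API writes for requests.cpu.

    Quantities like "250m" are millicores; the result is the request
    divided by the divisor, rounded up (sketch of documented behavior).
    """
    def to_millicores(q):
        # "250m" -> 250 millicores; "2" -> 2000 millicores
        return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)
    return str(math.ceil(to_millicores(request) / to_millicores(divisor)))
```

So a 250m request with the default divisor reads back as "1", while a divisor of "1m" reads back as "250".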
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:42:47.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 28 11:42:47.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:42:50.223: INFO: stderr: ""
Dec 28 11:42:50.223: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 11:42:50.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:42:50.388: INFO: stderr: ""
Dec 28 11:42:50.388: INFO: stdout: "update-demo-nautilus-7wz47 update-demo-nautilus-wvd86 "
Dec 28 11:42:50.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wz47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:42:50.555: INFO: stderr: ""
Dec 28 11:42:50.555: INFO: stdout: ""
Dec 28 11:42:50.555: INFO: update-demo-nautilus-7wz47 is created but not running
Dec 28 11:42:55.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:42:55.698: INFO: stderr: ""
Dec 28 11:42:55.698: INFO: stdout: "update-demo-nautilus-7wz47 update-demo-nautilus-wvd86 "
Dec 28 11:42:55.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wz47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:42:55.821: INFO: stderr: ""
Dec 28 11:42:55.821: INFO: stdout: ""
Dec 28 11:42:55.821: INFO: update-demo-nautilus-7wz47 is created but not running
Dec 28 11:43:00.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:01.000: INFO: stderr: ""
Dec 28 11:43:01.000: INFO: stdout: "update-demo-nautilus-7wz47 update-demo-nautilus-wvd86 "
Dec 28 11:43:01.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wz47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:01.167: INFO: stderr: ""
Dec 28 11:43:01.167: INFO: stdout: ""
Dec 28 11:43:01.167: INFO: update-demo-nautilus-7wz47 is created but not running
Dec 28 11:43:06.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.302: INFO: stderr: ""
Dec 28 11:43:06.302: INFO: stdout: "update-demo-nautilus-7wz47 update-demo-nautilus-wvd86 "
Dec 28 11:43:06.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wz47 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.393: INFO: stderr: ""
Dec 28 11:43:06.393: INFO: stdout: "true"
Dec 28 11:43:06.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wz47 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.490: INFO: stderr: ""
Dec 28 11:43:06.490: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 11:43:06.490: INFO: validating pod update-demo-nautilus-7wz47
Dec 28 11:43:06.524: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 11:43:06.524: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 11:43:06.524: INFO: update-demo-nautilus-7wz47 is verified up and running
Dec 28 11:43:06.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvd86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.621: INFO: stderr: ""
Dec 28 11:43:06.621: INFO: stdout: "true"
Dec 28 11:43:06.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvd86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.696: INFO: stderr: ""
Dec 28 11:43:06.696: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 11:43:06.696: INFO: validating pod update-demo-nautilus-wvd86
Dec 28 11:43:06.710: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 11:43:06.711: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 11:43:06.711: INFO: update-demo-nautilus-wvd86 is verified up and running
STEP: using delete to clean up resources
Dec 28 11:43:06.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:43:06.817: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 28 11:43:06.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-j8fzw'
Dec 28 11:43:06.958: INFO: stderr: "No resources found.\n"
Dec 28 11:43:06.959: INFO: stdout: ""
Dec 28 11:43:06.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-j8fzw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 11:43:07.101: INFO: stderr: ""
Dec 28 11:43:07.101: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:43:07.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j8fzw" for this suite.
Dec 28 11:43:33.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:43:33.301: INFO: namespace: e2e-tests-kubectl-j8fzw, resource: bindings, ignored listing per whitelist
Dec 28 11:43:33.342: INFO: namespace e2e-tests-kubectl-j8fzw deletion completed in 26.224169521s

• [SLOW TEST:45.528 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
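Annotation: the Update Demo run above repeatedly evaluates a kubectl go-template that prints "true" only when the named container reports a `running` state in `status.containerStatuses`. The same check in Python, against a pod object decoded from JSON (illustrative equivalent, not the template itself):

```python
def container_running(pod, name="update-demo"):
    """Mirror of the go-template check in the log: True iff the named
    container reports a running state in status.containerStatuses."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        # state is a dict keyed by exactly one of: running, waiting, terminated
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False
```

An empty stdout from the template (as seen for `update-demo-nautilus-7wz47` at first) corresponds to this returning False, which is why the test re-polls every five seconds.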
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:43:33.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 11:43:33.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 28 11:43:33.664: INFO: stderr: ""
Dec 28 11:43:33.664: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 28 11:43:33.672: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:43:33.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ftc95" for this suite.
Dec 28 11:43:39.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:43:39.783: INFO: namespace: e2e-tests-kubectl-ftc95, resource: bindings, ignored listing per whitelist
Dec 28 11:43:39.893: INFO: namespace e2e-tests-kubectl-ftc95 deletion completed in 6.211824606s

S [SKIPPING] [6.551 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 28 11:43:33.672: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
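Annotation: the SKIPPING result above comes from a server-version gate — the head of this log shows kube-apiserver v1.13.8, which is below the test's required "1.13.12". A minimal sketch of that dotted-version comparison (hypothetical helper, not the framework's version package):

```python
def server_supports(server_version, minimum):
    """True iff server_version >= minimum, comparing dotted numeric
    components and ignoring a leading 'v' (simplified semver compare)."""
    parse = lambda v: tuple(int(p) for p in v.lstrip("v").split("."))
    return parse(server_version) >= parse(minimum)
```

Here `server_supports("v1.13.8", "1.13.12")` is False, so the `kubectl describe` conformance case is skipped rather than failed.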
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:43:39.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 11:43:40.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-l6d56" to be "success or failure"
Dec 28 11:43:40.086: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795063ms
Dec 28 11:43:42.493: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416386512s
Dec 28 11:43:44.509: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432035074s
Dec 28 11:43:47.073: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.995672821s
Dec 28 11:43:49.087: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009549381s
Dec 28 11:43:51.101: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.023806707s
Dec 28 11:43:53.119: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.041694815s
STEP: Saw pod success
Dec 28 11:43:53.119: INFO: Pod "downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:43:53.123: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 11:43:53.668: INFO: Waiting for pod downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:43:54.030: INFO: Pod downwardapi-volume-4abd5313-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:43:54.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l6d56" for this suite.
Dec 28 11:44:00.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:44:00.429: INFO: namespace: e2e-tests-projected-l6d56, resource: bindings, ignored listing per whitelist
Dec 28 11:44:00.494: INFO: namespace e2e-tests-projected-l6d56 deletion completed in 6.454761069s

• [SLOW TEST:20.601 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
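Annotation: the "should set DefaultMode on files" test above verifies that projected downward API volume files get the pod-specified `defaultMode` permission bits, typically by reading the container's `ls -l` output. The expected permission string can be rendered with the standard library (this e2e commonly uses 0400; the exact mode here is not shown in the log):

```python
import stat

def mode_string(mode):
    """Render an octal file mode the way `ls -l` prints it for a
    regular file (e.g. 0o400 -> "-r--------")."""
    return stat.filemode(stat.S_IFREG | mode)
```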
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:44:00.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-570bf258-2967-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:44:00.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-9sds6" to be "success or failure"
Dec 28 11:44:00.720: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.792288ms
Dec 28 11:44:02.961: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246968496s
Dec 28 11:44:04.976: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26197229s
Dec 28 11:44:07.506: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.792272245s
Dec 28 11:44:09.527: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81274436s
Dec 28 11:44:11.550: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.835381275s
Dec 28 11:44:13.572: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.858180804s
STEP: Saw pod success
Dec 28 11:44:13.572: INFO: Pod "pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:44:13.576: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 11:44:14.355: INFO: Waiting for pod pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:44:14.390: INFO: Pod pod-configmaps-570c7844-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:44:14.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9sds6" for this suite.
Dec 28 11:44:22.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:44:22.831: INFO: namespace: e2e-tests-configmap-9sds6, resource: bindings, ignored listing per whitelist
Dec 28 11:44:22.967: INFO: namespace e2e-tests-configmap-9sds6 deletion completed in 8.552566826s

• [SLOW TEST:22.473 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
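Annotation: the ConfigMap "with mappings" test above mounts a configMap volume with an `items` list, so only the listed keys appear in the volume, under their mapped paths. A sketch of that projection rule (hypothetical helper; the sample key/path below mirrors what this e2e test conventionally uses):

```python
def project_configmap(data, items=None):
    """Compute the files a configMap volume produces.

    With no items, every key becomes a file named after the key; with
    items, only the listed keys appear, under their mapped paths.
    """
    if items is None:
        return dict(data)
    return {entry["path"]: data[entry["key"]] for entry in items}
```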
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:44:22.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 28 11:44:23.255: INFO: Waiting up to 5m0s for pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-var-expansion-7pmg8" to be "success or failure"
Dec 28 11:44:23.274: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.955756ms
Dec 28 11:44:25.376: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120497619s
Dec 28 11:44:27.397: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141211804s
Dec 28 11:44:29.555: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299095504s
Dec 28 11:44:31.572: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316242431s
Dec 28 11:44:34.033: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.777433914s
STEP: Saw pod success
Dec 28 11:44:34.033: INFO: Pod "var-expansion-6475261d-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:44:34.042: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-6475261d-2967-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 11:44:34.555: INFO: Waiting for pod var-expansion-6475261d-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:44:34.594: INFO: Pod var-expansion-6475261d-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:44:34.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7pmg8" for this suite.
Dec 28 11:44:40.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:44:41.063: INFO: namespace: e2e-tests-var-expansion-7pmg8, resource: bindings, ignored listing per whitelist
Dec 28 11:44:41.066: INFO: namespace e2e-tests-var-expansion-7pmg8 deletion completed in 6.461514061s

• [SLOW TEST:18.099 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
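Annotation: the Variable Expansion test above checks that `$(VAR_NAME)` references in a container's command are substituted from the container's environment, with unknown references left verbatim. A simplified sketch of that substitution (it omits the `$$` escape handling of the full Kubernetes expansion rules):

```python
import re

def expand(arg, env):
    """Expand $(VAR) references in a command argument: known variables
    are substituted, unknown references are left untouched."""
    def sub(m):
        return env.get(m.group(1), m.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, arg)
```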
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:44:41.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6f32a25c-2967-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 11:44:41.319: INFO: Waiting up to 5m0s for pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-hdmqx" to be "success or failure"
Dec 28 11:44:41.364: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.604975ms
Dec 28 11:44:43.420: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10058295s
Dec 28 11:44:45.430: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110957439s
Dec 28 11:44:48.215: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.895836451s
Dec 28 11:44:50.234: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.914490022s
Dec 28 11:44:52.255: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.935630478s
Dec 28 11:44:54.722: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.402741339s
STEP: Saw pod success
Dec 28 11:44:54.722: INFO: Pod "pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:44:54.781: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 11:44:55.105: INFO: Waiting for pod pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:44:55.119: INFO: Pod pod-secrets-6f355e74-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:44:55.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hdmqx" for this suite.
Dec 28 11:45:01.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:45:01.328: INFO: namespace: e2e-tests-secrets-hdmqx, resource: bindings, ignored listing per whitelist
Dec 28 11:45:01.435: INFO: namespace e2e-tests-secrets-hdmqx deletion completed in 6.305756154s

• [SLOW TEST:20.369 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:45:01.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 28 11:45:01.649: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-gzp4s" to be "success or failure"
Dec 28 11:45:01.676: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.951574ms
Dec 28 11:45:03.693: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044608886s
Dec 28 11:45:05.701: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052722083s
Dec 28 11:45:07.754: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104840028s
Dec 28 11:45:09.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122125473s
Dec 28 11:45:11.896: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.246857024s
Dec 28 11:45:13.926: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.277673813s
Dec 28 11:45:15.958: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.308926237s
Dec 28 11:45:17.976: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.327718524s
STEP: Saw pod success
Dec 28 11:45:17.977: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 28 11:45:17.985: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 28 11:45:18.102: INFO: Waiting for pod pod-host-path-test to disappear
Dec 28 11:45:18.122: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:45:18.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-gzp4s" for this suite.
Dec 28 11:45:24.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:45:24.299: INFO: namespace: e2e-tests-hostpath-gzp4s, resource: bindings, ignored listing per whitelist
Dec 28 11:45:24.337: INFO: namespace e2e-tests-hostpath-gzp4s deletion completed in 6.195758894s

• [SLOW TEST:22.901 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:45:24.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:45:24.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-g7bpj" for this suite.
Dec 28 11:45:48.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:45:48.931: INFO: namespace: e2e-tests-pods-g7bpj, resource: bindings, ignored listing per whitelist
Dec 28 11:45:48.987: INFO: namespace e2e-tests-pods-g7bpj deletion completed in 24.38179961s

• [SLOW TEST:24.651 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:45:48.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-97ab250c-2967-11ea-8e71-0242ac110005
STEP: Creating secret with name s-test-opt-upd-97ab255f-2967-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-97ab250c-2967-11ea-8e71-0242ac110005
STEP: Updating secret s-test-opt-upd-97ab255f-2967-11ea-8e71-0242ac110005
STEP: Creating secret with name s-test-opt-create-97ab258b-2967-11ea-8e71-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:46:07.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9h8fg" for this suite.
Dec 28 11:46:33.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:46:33.644: INFO: namespace: e2e-tests-projected-9h8fg, resource: bindings, ignored listing per whitelist
Dec 28 11:46:33.690: INFO: namespace e2e-tests-projected-9h8fg deletion completed in 26.293532899s

• [SLOW TEST:44.703 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:46:33.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-tqncz/secret-test-b26e0069-2967-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 11:46:34.057: INFO: Waiting up to 5m0s for pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-tqncz" to be "success or failure"
Dec 28 11:46:34.101: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.129523ms
Dec 28 11:46:36.122: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064841067s
Dec 28 11:46:38.136: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079056433s
Dec 28 11:46:40.715: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.657881397s
Dec 28 11:46:42.743: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.685681505s
Dec 28 11:46:44.758: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.700793051s
Dec 28 11:46:46.803: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.745656254s
STEP: Saw pod success
Dec 28 11:46:46.803: INFO: Pod "pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:46:46.827: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005 container env-test: 
STEP: delete the pod
Dec 28 11:46:47.036: INFO: Waiting for pod pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:46:47.197: INFO: Pod pod-configmaps-b26fa5b4-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:46:47.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tqncz" for this suite.
Dec 28 11:46:53.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:46:53.297: INFO: namespace: e2e-tests-secrets-tqncz, resource: bindings, ignored listing per whitelist
Dec 28 11:46:53.451: INFO: namespace e2e-tests-secrets-tqncz deletion completed in 6.245579855s

• [SLOW TEST:19.760 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:46:53.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 28 11:46:53.680: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 28 11:46:53.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:46:54.130: INFO: stderr: ""
Dec 28 11:46:54.130: INFO: stdout: "service/redis-slave created\n"
Dec 28 11:46:54.130: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 28 11:46:54.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:46:54.466: INFO: stderr: ""
Dec 28 11:46:54.466: INFO: stdout: "service/redis-master created\n"
Dec 28 11:46:54.466: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 28 11:46:54.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:46:54.997: INFO: stderr: ""
Dec 28 11:46:54.998: INFO: stdout: "service/frontend created\n"
Dec 28 11:46:54.998: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 28 11:46:54.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:46:55.294: INFO: stderr: ""
Dec 28 11:46:55.294: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 28 11:46:55.295: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 28 11:46:55.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:46:55.605: INFO: stderr: ""
Dec 28 11:46:55.605: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 28 11:46:55.605: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 28 11:46:55.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:46:56.004: INFO: stderr: ""
Dec 28 11:46:56.004: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 28 11:46:56.004: INFO: Waiting for all frontend pods to be Running.
Dec 28 11:47:26.055: INFO: Waiting for frontend to serve content.
Dec 28 11:47:27.972: INFO: Trying to add a new entry to the guestbook.
Dec 28 11:47:28.027: INFO: Verifying that added entry can be retrieved.
Dec 28 11:47:28.061: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 28 11:47:33.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:47:33.395: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:47:33.395: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 11:47:33.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:47:33.682: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:47:33.683: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 11:47:33.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:47:33.852: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:47:33.852: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 11:47:33.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:47:34.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:47:34.081: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 11:47:34.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:47:34.215: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:47:34.215: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 28 11:47:34.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4ngk2'
Dec 28 11:47:34.554: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 11:47:34.554: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:47:34.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4ngk2" for this suite.
Dec 28 11:48:24.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:48:24.829: INFO: namespace: e2e-tests-kubectl-4ngk2, resource: bindings, ignored listing per whitelist
Dec 28 11:48:24.918: INFO: namespace e2e-tests-kubectl-4ngk2 deletion completed in 50.239676072s

• [SLOW TEST:91.467 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:48:24.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f4af1b89-2967-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:48:25.201: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-g75mr" to be "success or failure"
Dec 28 11:48:25.287: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.778439ms
Dec 28 11:48:27.299: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098239769s
Dec 28 11:48:29.317: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116368604s
Dec 28 11:48:32.113: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.91208516s
Dec 28 11:48:34.131: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.930402679s
Dec 28 11:48:36.157: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.9560797s
STEP: Saw pod success
Dec 28 11:48:36.157: INFO: Pod "pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:48:36.172: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 11:48:36.366: INFO: Waiting for pod pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005 to disappear
Dec 28 11:48:36.377: INFO: Pod pod-projected-configmaps-f4b00504-2967-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:48:36.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g75mr" for this suite.
Dec 28 11:48:42.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:48:42.587: INFO: namespace: e2e-tests-projected-g75mr, resource: bindings, ignored listing per whitelist
Dec 28 11:48:42.662: INFO: namespace e2e-tests-projected-g75mr deletion completed in 6.275858384s

• [SLOW TEST:17.744 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:48:42.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005
Dec 28 11:48:42.927: INFO: Pod name my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005: Found 0 pods out of 1
Dec 28 11:48:49.361: INFO: Pod name my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005: Found 1 pods out of 1
Dec 28 11:48:49.361: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005" are running
Dec 28 11:48:54.137: INFO: Pod "my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005-jrqq9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 11:48:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 11:48:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 11:48:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 11:48:42 +0000 UTC Reason: Message:}])
Dec 28 11:48:54.137: INFO: Trying to dial the pod
Dec 28 11:48:59.254: INFO: Controller my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005: Got expected result from replica 1 [my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005-jrqq9]: "my-hostname-basic-ff41de57-2967-11ea-8e71-0242ac110005-jrqq9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:48:59.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-gnv2k" for this suite.
Dec 28 11:49:05.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:49:05.409: INFO: namespace: e2e-tests-replication-controller-gnv2k, resource: bindings, ignored listing per whitelist
Dec 28 11:49:05.494: INFO: namespace e2e-tests-replication-controller-gnv2k deletion completed in 6.23076152s

• [SLOW TEST:22.831 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:49:05.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 28 11:49:18.216: INFO: Successfully updated pod "pod-update-0cc90624-2968-11ea-8e71-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 28 11:49:18.249: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:49:18.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6gl8z" for this suite.
Dec 28 11:49:42.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:49:42.505: INFO: namespace: e2e-tests-pods-6gl8z, resource: bindings, ignored listing per whitelist
Dec 28 11:49:42.680: INFO: namespace e2e-tests-pods-6gl8z deletion completed in 24.423873537s

• [SLOW TEST:37.186 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:49:42.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 28 11:49:43.864: INFO: Pod name wrapped-volume-race-23819495-2968-11ea-8e71-0242ac110005: Found 0 pods out of 5
Dec 28 11:49:48.900: INFO: Pod name wrapped-volume-race-23819495-2968-11ea-8e71-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-23819495-2968-11ea-8e71-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7b6ks, will wait for the garbage collector to delete the pods
Dec 28 11:52:03.180: INFO: Deleting ReplicationController wrapped-volume-race-23819495-2968-11ea-8e71-0242ac110005 took: 23.310234ms
Dec 28 11:52:03.680: INFO: Terminating ReplicationController wrapped-volume-race-23819495-2968-11ea-8e71-0242ac110005 pods took: 500.519351ms
STEP: Creating RC which spawns configmap-volume pods
Dec 28 11:52:53.825: INFO: Pod name wrapped-volume-race-94ab4fd3-2968-11ea-8e71-0242ac110005: Found 0 pods out of 5
Dec 28 11:52:58.880: INFO: Pod name wrapped-volume-race-94ab4fd3-2968-11ea-8e71-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-94ab4fd3-2968-11ea-8e71-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7b6ks, will wait for the garbage collector to delete the pods
Dec 28 11:54:53.032: INFO: Deleting ReplicationController wrapped-volume-race-94ab4fd3-2968-11ea-8e71-0242ac110005 took: 25.048316ms
Dec 28 11:54:53.432: INFO: Terminating ReplicationController wrapped-volume-race-94ab4fd3-2968-11ea-8e71-0242ac110005 pods took: 400.618492ms
STEP: Creating RC which spawns configmap-volume pods
Dec 28 11:55:42.872: INFO: Pod name wrapped-volume-race-f97dd5f9-2968-11ea-8e71-0242ac110005: Found 0 pods out of 5
Dec 28 11:55:47.943: INFO: Pod name wrapped-volume-race-f97dd5f9-2968-11ea-8e71-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f97dd5f9-2968-11ea-8e71-0242ac110005 in namespace e2e-tests-emptydir-wrapper-7b6ks, will wait for the garbage collector to delete the pods
Dec 28 11:57:30.152: INFO: Deleting ReplicationController wrapped-volume-race-f97dd5f9-2968-11ea-8e71-0242ac110005 took: 40.23057ms
Dec 28 11:57:30.652: INFO: Terminating ReplicationController wrapped-volume-race-f97dd5f9-2968-11ea-8e71-0242ac110005 pods took: 500.67392ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:58:24.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-7b6ks" for this suite.
Dec 28 11:58:34.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:58:34.920: INFO: namespace: e2e-tests-emptydir-wrapper-7b6ks, resource: bindings, ignored listing per whitelist
Dec 28 11:58:35.030: INFO: namespace e2e-tests-emptydir-wrapper-7b6ks deletion completed in 10.284216559s

• [SLOW TEST:532.349 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:58:35.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 28 11:58:35.305: INFO: Waiting up to 5m0s for pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005" in namespace "e2e-tests-var-expansion-qrrjt" to be "success or failure"
Dec 28 11:58:35.325: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.357276ms
Dec 28 11:58:39.102: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.796656253s
Dec 28 11:58:41.560: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255213199s
Dec 28 11:58:43.581: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.276013058s
Dec 28 11:58:46.345: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.0403937s
Dec 28 11:58:48.365: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.060444608s
Dec 28 11:58:50.383: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.077852947s
Dec 28 11:58:52.612: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.307566961s
STEP: Saw pod success
Dec 28 11:58:52.613: INFO: Pod "var-expansion-60574786-2969-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:58:52.621: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-60574786-2969-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 11:58:52.801: INFO: Waiting for pod var-expansion-60574786-2969-11ea-8e71-0242ac110005 to disappear
Dec 28 11:58:52.823: INFO: Pod var-expansion-60574786-2969-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:58:52.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-qrrjt" for this suite.
Dec 28 11:58:59.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:58:59.072: INFO: namespace: e2e-tests-var-expansion-qrrjt, resource: bindings, ignored listing per whitelist
Dec 28 11:58:59.270: INFO: namespace e2e-tests-var-expansion-qrrjt deletion completed in 6.430340988s

• [SLOW TEST:24.241 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:58:59.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 28 11:58:59.616: INFO: Waiting up to 5m0s for pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005" in namespace "e2e-tests-var-expansion-2nkj6" to be "success or failure"
Dec 28 11:58:59.709: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 92.199178ms
Dec 28 11:59:01.743: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126931611s
Dec 28 11:59:03.769: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152603698s
Dec 28 11:59:05.832: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215834125s
Dec 28 11:59:07.863: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24599182s
Dec 28 11:59:09.929: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.312119153s
STEP: Saw pod success
Dec 28 11:59:09.929: INFO: Pod "var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:59:09.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 11:59:10.225: INFO: Waiting for pod var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005 to disappear
Dec 28 11:59:10.232: INFO: Pod var-expansion-6ed4bbe1-2969-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:59:10.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2nkj6" for this suite.
Dec 28 11:59:16.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:59:16.574: INFO: namespace: e2e-tests-var-expansion-2nkj6, resource: bindings, ignored listing per whitelist
Dec 28 11:59:16.625: INFO: namespace e2e-tests-var-expansion-2nkj6 deletion completed in 6.379900205s

• [SLOW TEST:17.354 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:59:16.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1228 11:59:20.100689       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 11:59:20.100: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:59:20.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-jdphh" for this suite.
Dec 28 11:59:26.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:59:26.427: INFO: namespace: e2e-tests-gc-jdphh, resource: bindings, ignored listing per whitelist
Dec 28 11:59:26.510: INFO: namespace e2e-tests-gc-jdphh deletion completed in 6.393682344s

• [SLOW TEST:9.885 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:59:26.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7f02dcb1-2969-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 11:59:26.761: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-ptbdg" to be "success or failure"
Dec 28 11:59:26.771: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.503649ms
Dec 28 11:59:28.789: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027602802s
Dec 28 11:59:30.824: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062945475s
Dec 28 11:59:33.731: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.969184302s
Dec 28 11:59:35.744: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.982508803s
Dec 28 11:59:37.763: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.001328756s
Dec 28 11:59:40.373: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.61146483s
STEP: Saw pod success
Dec 28 11:59:40.373: INFO: Pod "pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 11:59:40.388: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 11:59:41.327: INFO: Waiting for pod pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005 to disappear
Dec 28 11:59:41.334: INFO: Pod pod-projected-configmaps-7f0444b0-2969-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 11:59:41.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ptbdg" for this suite.
Dec 28 11:59:47.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 11:59:47.478: INFO: namespace: e2e-tests-projected-ptbdg, resource: bindings, ignored listing per whitelist
Dec 28 11:59:47.509: INFO: namespace e2e-tests-projected-ptbdg deletion completed in 6.166110967s

• [SLOW TEST:20.998 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 11:59:47.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 28 11:59:47.664: INFO: Waiting up to 5m0s for pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-fvjrk" to be "success or failure"
Dec 28 11:59:47.746: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.442269ms
Dec 28 11:59:49.883: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21798378s
Dec 28 11:59:51.898: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233277948s
Dec 28 11:59:54.286: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621327358s
Dec 28 11:59:56.461: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796696565s
Dec 28 11:59:58.568: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.90342888s
Dec 28 12:00:00.781: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.115972913s
STEP: Saw pod success
Dec 28 12:00:00.781: INFO: Pod "pod-8b77bce6-2969-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:00:01.133: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8b77bce6-2969-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:00:01.317: INFO: Waiting for pod pod-8b77bce6-2969-11ea-8e71-0242ac110005 to disappear
Dec 28 12:00:01.336: INFO: Pod pod-8b77bce6-2969-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:00:01.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fvjrk" for this suite.
Dec 28 12:00:07.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:00:07.448: INFO: namespace: e2e-tests-emptydir-fvjrk, resource: bindings, ignored listing per whitelist
Dec 28 12:00:07.541: INFO: namespace e2e-tests-emptydir-fvjrk deletion completed in 6.196875411s

• [SLOW TEST:20.032 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:00:07.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-p94bq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-p94bq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-p94bq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.112_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-p94bq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-p94bq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-p94bq.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-p94bq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-p94bq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.177.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.177.112_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 28 12:00:26.007: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.012: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.021: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-p94bq from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.026: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-p94bq from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.033: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.040: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.047: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.051: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.058: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.065: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.070: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.076: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.081: INFO: Unable to read 10.103.177.112_udp@PTR from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.087: INFO: Unable to read 10.103.177.112_tcp@PTR from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.093: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.099: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.106: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-p94bq from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.113: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-p94bq from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.120: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.125: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.135: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.143: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.150: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.156: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.161: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.165: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.170: INFO: Unable to read 10.103.177.112_udp@PTR from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.176: INFO: Unable to read 10.103.177.112_tcp@PTR from pod e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005: the server could not find the requested resource (get pods dns-test-9777fc60-2969-11ea-8e71-0242ac110005)
Dec 28 12:00:26.176: INFO: Lookups using e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-p94bq wheezy_tcp@dns-test-service.e2e-tests-dns-p94bq wheezy_udp@dns-test-service.e2e-tests-dns-p94bq.svc wheezy_tcp@dns-test-service.e2e-tests-dns-p94bq.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.103.177.112_udp@PTR 10.103.177.112_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-p94bq jessie_tcp@dns-test-service.e2e-tests-dns-p94bq jessie_udp@dns-test-service.e2e-tests-dns-p94bq.svc jessie_tcp@dns-test-service.e2e-tests-dns-p94bq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-p94bq.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-p94bq.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.103.177.112_udp@PTR 10.103.177.112_tcp@PTR]

Dec 28 12:00:31.305: INFO: DNS probes using e2e-tests-dns-p94bq/dns-test-9777fc60-2969-11ea-8e71-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:00:31.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-p94bq" for this suite.
Dec 28 12:00:39.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:00:39.788: INFO: namespace: e2e-tests-dns-p94bq, resource: bindings, ignored listing per whitelist
Dec 28 12:00:40.009: INFO: namespace e2e-tests-dns-p94bq deletion completed in 8.31043648s

• [SLOW TEST:32.468 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:00:40.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-aae8025c-2969-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 12:00:40.497: INFO: Waiting up to 5m0s for pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-qckwg" to be "success or failure"
Dec 28 12:00:40.539: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.549701ms
Dec 28 12:00:42.751: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254072912s
Dec 28 12:00:44.769: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271892937s
Dec 28 12:00:47.711: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.214233791s
Dec 28 12:00:49.728: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.230762966s
Dec 28 12:00:51.742: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.245092466s
STEP: Saw pod success
Dec 28 12:00:51.742: INFO: Pod "pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:00:51.749: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 12:00:51.878: INFO: Waiting for pod pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005 to disappear
Dec 28 12:00:53.050: INFO: Pod pod-configmaps-aaea95dc-2969-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:00:53.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qckwg" for this suite.
Dec 28 12:00:59.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:00:59.608: INFO: namespace: e2e-tests-configmap-qckwg, resource: bindings, ignored listing per whitelist
Dec 28 12:00:59.655: INFO: namespace e2e-tests-configmap-qckwg deletion completed in 6.585203212s

• [SLOW TEST:19.646 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:00:59.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 28 12:00:59.915: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 12:00:59.924: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 12:00:59.929: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 28 12:00:59.952: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:00:59.952: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 28 12:00:59.952: INFO: 	Container coredns ready: true, restart count 0
Dec 28 12:00:59.952: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 28 12:00:59.952: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 12:00:59.952: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:00:59.952: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 28 12:00:59.952: INFO: 	Container weave ready: true, restart count 0
Dec 28 12:00:59.952: INFO: 	Container weave-npc ready: true, restart count 0
Dec 28 12:00:59.952: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 28 12:00:59.952: INFO: 	Container coredns ready: true, restart count 0
Dec 28 12:00:59.952: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:00:59.952: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bcb10908-2969-11ea-8e71-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-bcb10908-2969-11ea-8e71-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bcb10908-2969-11ea-8e71-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:01:22.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-r9hxl" for this suite.
Dec 28 12:01:36.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:01:36.673: INFO: namespace: e2e-tests-sched-pred-r9hxl, resource: bindings, ignored listing per whitelist
Dec 28 12:01:36.727: INFO: namespace e2e-tests-sched-pred-r9hxl deletion completed in 14.260304014s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:37.072 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:01:36.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 28 12:01:36.936: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 28 12:01:36.966: INFO: Waiting for terminating namespaces to be deleted...
Dec 28 12:01:36.970: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 28 12:01:36.987: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 28 12:01:36.987: INFO: 	Container coredns ready: true, restart count 0
Dec 28 12:01:36.987: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:01:36.987: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:01:36.987: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:01:36.987: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 28 12:01:36.987: INFO: 	Container coredns ready: true, restart count 0
Dec 28 12:01:36.987: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 28 12:01:36.987: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 28 12:01:36.987: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 28 12:01:36.987: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 28 12:01:36.987: INFO: 	Container weave ready: true, restart count 0
Dec 28 12:01:36.987: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 28 12:01:37.112: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ccb7d18b-2969-11ea-8e71-0242ac110005.15e487445c468589], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-vcmcg/filler-pod-ccb7d18b-2969-11ea-8e71-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ccb7d18b-2969-11ea-8e71-0242ac110005.15e487457bcf2c40], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ccb7d18b-2969-11ea-8e71-0242ac110005.15e48746572197ac], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ccb7d18b-2969-11ea-8e71-0242ac110005.15e4874684716a29], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e487472a6278ff], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:01:50.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-vcmcg" for this suite.
Dec 28 12:01:56.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:01:57.979: INFO: namespace: e2e-tests-sched-pred-vcmcg, resource: bindings, ignored listing per whitelist
Dec 28 12:01:58.005: INFO: namespace e2e-tests-sched-pred-vcmcg deletion completed in 7.641227096s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:21.277 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:01:58.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 28 12:02:10.948: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d95a21c2-2969-11ea-8e71-0242ac110005"
Dec 28 12:02:10.949: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d95a21c2-2969-11ea-8e71-0242ac110005" in namespace "e2e-tests-pods-kztq9" to be "terminated due to deadline exceeded"
Dec 28 12:02:11.011: INFO: Pod "pod-update-activedeadlineseconds-d95a21c2-2969-11ea-8e71-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 62.727585ms
Dec 28 12:02:13.174: INFO: Pod "pod-update-activedeadlineseconds-d95a21c2-2969-11ea-8e71-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.22501532s
Dec 28 12:02:13.174: INFO: Pod "pod-update-activedeadlineseconds-d95a21c2-2969-11ea-8e71-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:02:13.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kztq9" for this suite.
Dec 28 12:02:19.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:02:19.268: INFO: namespace: e2e-tests-pods-kztq9, resource: bindings, ignored listing per whitelist
Dec 28 12:02:19.347: INFO: namespace e2e-tests-pods-kztq9 deletion completed in 6.152831122s

• [SLOW TEST:21.342 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:02:19.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 28 12:02:26.673: INFO: 10 pods remaining
Dec 28 12:02:26.673: INFO: 10 pods have nil DeletionTimestamp
Dec 28 12:02:26.673: INFO: 
Dec 28 12:02:27.642: INFO: 9 pods remaining
Dec 28 12:02:27.642: INFO: 5 pods have nil DeletionTimestamp
Dec 28 12:02:27.642: INFO: 
Dec 28 12:02:28.239: INFO: 1 pods remaining
Dec 28 12:02:28.239: INFO: 0 pods have nil DeletionTimestamp
Dec 28 12:02:28.239: INFO: 
STEP: Gathering metrics
W1228 12:02:28.936367       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 12:02:28.936: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:02:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-t5xpb" for this suite.
Dec 28 12:02:45.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:02:45.497: INFO: namespace: e2e-tests-gc-t5xpb, resource: bindings, ignored listing per whitelist
Dec 28 12:02:45.540: INFO: namespace e2e-tests-gc-t5xpb deletion completed in 16.599534506s

• [SLOW TEST:26.192 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:02:45.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-94s4h
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 28 12:02:45.890: INFO: Found 0 stateful pods, waiting for 3
Dec 28 12:02:55.915: INFO: Found 1 stateful pods, waiting for 3
Dec 28 12:03:05.914: INFO: Found 2 stateful pods, waiting for 3
Dec 28 12:03:15.995: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:03:15.995: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:03:15.995: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 12:03:25.903: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:03:25.903: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:03:25.903: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 28 12:03:25.986: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 28 12:03:36.155: INFO: Updating stateful set ss2
Dec 28 12:03:36.172: INFO: Waiting for Pod e2e-tests-statefulset-94s4h/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 12:03:46.194: INFO: Waiting for Pod e2e-tests-statefulset-94s4h/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 28 12:03:56.794: INFO: Found 2 stateful pods, waiting for 3
Dec 28 12:04:07.519: INFO: Found 2 stateful pods, waiting for 3
Dec 28 12:04:17.822: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:04:17.822: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:04:17.822: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 12:04:26.817: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:04:26.817: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 12:04:26.817: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 28 12:04:26.891: INFO: Updating stateful set ss2
Dec 28 12:04:27.096: INFO: Waiting for Pod e2e-tests-statefulset-94s4h/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 12:04:37.152: INFO: Updating stateful set ss2
Dec 28 12:04:37.169: INFO: Waiting for StatefulSet e2e-tests-statefulset-94s4h/ss2 to complete update
Dec 28 12:04:37.169: INFO: Waiting for Pod e2e-tests-statefulset-94s4h/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 12:04:47.236: INFO: Waiting for StatefulSet e2e-tests-statefulset-94s4h/ss2 to complete update
Dec 28 12:04:47.236: INFO: Waiting for Pod e2e-tests-statefulset-94s4h/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 12:04:57.202: INFO: Waiting for StatefulSet e2e-tests-statefulset-94s4h/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 28 12:05:07.195: INFO: Deleting all statefulset in ns e2e-tests-statefulset-94s4h
Dec 28 12:05:07.202: INFO: Scaling statefulset ss2 to 0
Dec 28 12:05:47.248: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 12:05:47.256: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:05:47.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-94s4h" for this suite.
Dec 28 12:05:55.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:05:55.502: INFO: namespace: e2e-tests-statefulset-94s4h, resource: bindings, ignored listing per whitelist
Dec 28 12:05:55.782: INFO: namespace e2e-tests-statefulset-94s4h deletion completed in 8.484163716s

• [SLOW TEST:190.243 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
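The canary and phased rolling update sequence above is driven by the StatefulSet's `spec.updateStrategy.rollingUpdate.partition` field: pods with an ordinal greater than or equal to the partition receive the new template revision, while lower ordinals keep the old one. A minimal sketch of a manifest of that shape (the name, labels, and image mirror the log but are illustrative, not the test's actual fixture):

```yaml
# Illustrative sketch: with partition: 2 and 3 replicas, only pod ss2-2
# is updated to the new template; ss2-0 and ss2-1 stay on the old revision.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
```

Lowering `partition` step by step (2 → 1 → 0) produces the phased roll-out the log shows, one ordinal at a time from highest to lowest.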
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:05:55.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 28 12:05:56.084: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:06:13.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-l445h" for this suite.
Dec 28 12:06:19.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:06:20.193: INFO: namespace: e2e-tests-init-container-l445h, resource: bindings, ignored listing per whitelist
Dec 28 12:06:20.210: INFO: namespace e2e-tests-init-container-l445h deletion completed in 6.249410564s

• [SLOW TEST:24.427 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
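The init-container test above submits a pod with `restartPolicy: Never` and verifies its init containers run to completion, in order, before the app container starts. A minimal pod manifest of that shape (container names, images, and commands here are illustrative assumptions, not the test's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  # Init containers run sequentially; each must exit 0 before the next starts.
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "exit 0"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "exit 0"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo done"]
```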
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:06:20.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nhhj2
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 12:06:20.405: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 12:06:54.642: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-nhhj2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:06:54.642: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:06:55.129: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:06:55.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-nhhj2" for this suite.
Dec 28 12:07:21.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:07:21.299: INFO: namespace: e2e-tests-pod-network-test-nhhj2, resource: bindings, ignored listing per whitelist
Dec 28 12:07:21.375: INFO: namespace e2e-tests-pod-network-test-nhhj2 deletion completed in 26.175061846s

• [SLOW TEST:61.165 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:07:21.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 12:07:21.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4cs4n'
Dec 28 12:07:23.759: INFO: stderr: ""
Dec 28 12:07:23.759: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 28 12:07:23.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4cs4n'
Dec 28 12:07:30.710: INFO: stderr: ""
Dec 28 12:07:30.710: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:07:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4cs4n" for this suite.
Dec 28 12:07:36.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:07:36.802: INFO: namespace: e2e-tests-kubectl-4cs4n, resource: bindings, ignored listing per whitelist
Dec 28 12:07:36.998: INFO: namespace e2e-tests-kubectl-4cs4n deletion completed in 6.277548722s

• [SLOW TEST:15.623 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:07:36.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 28 12:07:49.500: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-a35b0f05-296a-11ea-8e71-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-rppk5", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rppk5/pods/pod-submit-remove-a35b0f05-296a-11ea-8e71-0242ac110005", UID:"a36c5cad-296a-11ea-a994-fa163e34d433", ResourceVersion:"16345710", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713131657, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"214759370", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xjchm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0011d2d00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xjchm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019418a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001655260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001941980)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0019419a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019419a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019419ac)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713131657, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713131667, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713131667, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713131657, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0017f1360), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0017f1380), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://0cf135bb127ddf589b05298265a75d77e004a46dedf3416b18709a2d72825f52"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:08:02.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rppk5" for this suite.
Dec 28 12:08:08.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:08:08.784: INFO: namespace: e2e-tests-pods-rppk5, resource: bindings, ignored listing per whitelist
Dec 28 12:08:08.798: INFO: namespace e2e-tests-pods-rppk5 deletion completed in 6.154689054s

• [SLOW TEST:31.800 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:08:08.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-5vfqr
Dec 28 12:08:19.154: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-5vfqr
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 12:08:19.160: INFO: Initial restart count of pod liveness-exec is 0
Dec 28 12:09:16.167: INFO: Restart count of pod e2e-tests-container-probe-5vfqr/liveness-exec is now 1 (57.007088705s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:09:16.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5vfqr" for this suite.
Dec 28 12:09:22.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:09:22.568: INFO: namespace: e2e-tests-container-probe-5vfqr, resource: bindings, ignored listing per whitelist
Dec 28 12:09:22.601: INFO: namespace e2e-tests-container-probe-5vfqr deletion completed in 6.291789363s

• [SLOW TEST:73.803 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
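The `liveness-exec` pod above restarted once after about 57 seconds because its exec liveness probe (`cat /tmp/health`) began failing. The well-known pattern this test exercises is a container that creates the probe file, sleeps, then deletes it so the probe fails and the kubelet restarts the container; a sketch under that assumption (image and timings illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 30s, then the probe file disappears and the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3   # consecutive probe failures before the kubelet restarts the container
```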
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:09:22.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 12:09:22.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-lxfjj'
Dec 28 12:09:22.881: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 12:09:22.881: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 28 12:09:26.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lxfjj'
Dec 28 12:09:27.227: INFO: stderr: ""
Dec 28 12:09:27.228: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:09:27.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lxfjj" for this suite.
Dec 28 12:09:35.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:09:35.377: INFO: namespace: e2e-tests-kubectl-lxfjj, resource: bindings, ignored listing per whitelist
Dec 28 12:09:35.451: INFO: namespace e2e-tests-kubectl-lxfjj deletion completed in 8.208561977s

• [SLOW TEST:12.849 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
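The stderr in the run above notes that `kubectl run --generator=deployment/v1beta1` is deprecated in favor of `kubectl create` or a declarative manifest. A rough sketch of the equivalent Deployment manifest for what this test creates (the selector and labels are illustrative assumptions; the old generator's exact defaults may differ):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```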
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:09:35.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 28 12:09:35.574: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:09:35.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vrh76" for this suite.
Dec 28 12:09:41.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:09:41.995: INFO: namespace: e2e-tests-kubectl-vrh76, resource: bindings, ignored listing per whitelist
Dec 28 12:09:42.037: INFO: namespace e2e-tests-kubectl-vrh76 deletion completed in 6.319400256s

• [SLOW TEST:6.586 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:09:42.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 28 12:09:42.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:42.766: INFO: stderr: ""
Dec 28 12:09:42.766: INFO: stdout: "pod/pause created\n"
Dec 28 12:09:42.766: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 28 12:09:42.766: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-p8lnx" to be "running and ready"
Dec 28 12:09:42.902: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 135.622199ms
Dec 28 12:09:44.928: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162263575s
Dec 28 12:09:46.944: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177973956s
Dec 28 12:09:50.370: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.603732569s
Dec 28 12:09:52.383: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.616981161s
Dec 28 12:09:54.396: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 11.629732781s
Dec 28 12:09:54.396: INFO: Pod "pause" satisfied condition "running and ready"
Dec 28 12:09:54.396: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 28 12:09:54.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:54.634: INFO: stderr: ""
Dec 28 12:09:54.634: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 28 12:09:54.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:54.760: INFO: stderr: ""
Dec 28 12:09:54.760: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          12s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 28 12:09:54.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:54.922: INFO: stderr: ""
Dec 28 12:09:54.922: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 28 12:09:54.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:55.128: INFO: stderr: ""
Dec 28 12:09:55.128: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 28 12:09:55.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:55.309: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 12:09:55.309: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 28 12:09:55.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-p8lnx'
Dec 28 12:09:55.450: INFO: stderr: "No resources found.\n"
Dec 28 12:09:55.450: INFO: stdout: ""
Dec 28 12:09:55.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-p8lnx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 12:09:55.547: INFO: stderr: ""
Dec 28 12:09:55.547: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:09:55.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p8lnx" for this suite.
Dec 28 12:10:01.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:10:02.573: INFO: namespace: e2e-tests-kubectl-p8lnx, resource: bindings, ignored listing per whitelist
Dec 28 12:10:02.611: INFO: namespace e2e-tests-kubectl-p8lnx deletion completed in 7.050420452s

• [SLOW TEST:20.574 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:10:02.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:10:12.879: INFO: Waiting up to 5m0s for pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-pods-4zhtz" to be "success or failure"
Dec 28 12:10:12.901: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.977706ms
Dec 28 12:10:15.029: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150150516s
Dec 28 12:10:17.066: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186940369s
Dec 28 12:10:19.388: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509330719s
Dec 28 12:10:21.397: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518018132s
Dec 28 12:10:23.413: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.533801784s
STEP: Saw pod success
Dec 28 12:10:23.413: INFO: Pod "client-envvars-001951a6-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:10:23.418: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-001951a6-296b-11ea-8e71-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 28 12:10:24.706: INFO: Waiting for pod client-envvars-001951a6-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:10:25.128: INFO: Pod client-envvars-001951a6-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:10:25.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4zhtz" for this suite.
Dec 28 12:11:15.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:11:15.270: INFO: namespace: e2e-tests-pods-4zhtz, resource: bindings, ignored listing per whitelist
Dec 28 12:11:15.464: INFO: namespace e2e-tests-pods-4zhtz deletion completed in 50.307901503s

• [SLOW TEST:72.853 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:11:15.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-258d4f2d-296b-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:11:15.665: INFO: Waiting up to 5m0s for pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-sm2jn" to be "success or failure"
Dec 28 12:11:15.675: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.260417ms
Dec 28 12:11:17.695: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029405752s
Dec 28 12:11:19.709: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044054537s
Dec 28 12:11:22.087: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421581943s
Dec 28 12:11:24.118: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.452816349s
Dec 28 12:11:26.140: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.474596492s
Dec 28 12:11:28.224: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.558332097s
STEP: Saw pod success
Dec 28 12:11:28.224: INFO: Pod "pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:11:28.232: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 12:11:28.647: INFO: Waiting for pod pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:11:28.692: INFO: Pod pod-secrets-258e6b51-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:11:28.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sm2jn" for this suite.
Dec 28 12:11:34.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:11:34.793: INFO: namespace: e2e-tests-secrets-sm2jn, resource: bindings, ignored listing per whitelist
Dec 28 12:11:34.979: INFO: namespace e2e-tests-secrets-sm2jn deletion completed in 6.276732724s

• [SLOW TEST:19.515 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:11:34.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-3137a30d-296b-11ea-8e71-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-3137a386-296b-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3137a30d-296b-11ea-8e71-0242ac110005
STEP: Updating configmap cm-test-opt-upd-3137a386-296b-11ea-8e71-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-3137a3b0-296b-11ea-8e71-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:13:25.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6mfcl" for this suite.
Dec 28 12:13:49.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:13:49.571: INFO: namespace: e2e-tests-configmap-6mfcl, resource: bindings, ignored listing per whitelist
Dec 28 12:13:49.844: INFO: namespace e2e-tests-configmap-6mfcl deletion completed in 24.494077399s

• [SLOW TEST:134.865 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:13:49.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:13:50.143: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:13:51.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-ps9hk" for this suite.
Dec 28 12:13:57.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:13:57.620: INFO: namespace: e2e-tests-custom-resource-definition-ps9hk, resource: bindings, ignored listing per whitelist
Dec 28 12:13:57.634: INFO: namespace e2e-tests-custom-resource-definition-ps9hk deletion completed in 6.389305454s

• [SLOW TEST:7.789 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:13:57.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 28 12:13:58.041: INFO: Waiting up to 5m0s for pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-containers-czwqc" to be "success or failure"
Dec 28 12:13:58.051: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.667245ms
Dec 28 12:14:00.205: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16352767s
Dec 28 12:14:02.253: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211493669s
Dec 28 12:14:04.349: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307597863s
Dec 28 12:14:06.904: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.863045609s
Dec 28 12:14:08.931: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.889732164s
Dec 28 12:14:10.961: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.919578939s
STEP: Saw pod success
Dec 28 12:14:10.961: INFO: Pod "client-containers-864a11f3-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:14:10.969: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-864a11f3-296b-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:14:11.106: INFO: Waiting for pod client-containers-864a11f3-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:14:11.208: INFO: Pod client-containers-864a11f3-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:14:11.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-czwqc" for this suite.
Dec 28 12:14:17.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:14:17.506: INFO: namespace: e2e-tests-containers-czwqc, resource: bindings, ignored listing per whitelist
Dec 28 12:14:17.533: INFO: namespace e2e-tests-containers-czwqc deletion completed in 6.316436356s

• [SLOW TEST:19.899 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:14:17.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 12:14:17.867: INFO: Number of nodes with available pods: 0
Dec 28 12:14:17.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:18.910: INFO: Number of nodes with available pods: 0
Dec 28 12:14:18.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:19.893: INFO: Number of nodes with available pods: 0
Dec 28 12:14:19.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:20.891: INFO: Number of nodes with available pods: 0
Dec 28 12:14:20.891: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:21.895: INFO: Number of nodes with available pods: 0
Dec 28 12:14:21.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:22.918: INFO: Number of nodes with available pods: 0
Dec 28 12:14:22.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:24.325: INFO: Number of nodes with available pods: 0
Dec 28 12:14:24.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:24.923: INFO: Number of nodes with available pods: 0
Dec 28 12:14:24.923: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:25.932: INFO: Number of nodes with available pods: 0
Dec 28 12:14:25.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:26.895: INFO: Number of nodes with available pods: 0
Dec 28 12:14:26.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 12:14:27.911: INFO: Number of nodes with available pods: 1
Dec 28 12:14:27.911: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 28 12:14:28.023: INFO: Number of nodes with available pods: 1
Dec 28 12:14:28.023: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vn8q2, will wait for the garbage collector to delete the pods
Dec 28 12:14:29.148: INFO: Deleting DaemonSet.extensions daemon-set took: 22.532247ms
Dec 28 12:14:31.348: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.200241123s
Dec 28 12:14:34.878: INFO: Number of nodes with available pods: 0
Dec 28 12:14:34.878: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 12:14:34.884: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vn8q2/daemonsets","resourceVersion":"16346488"},"items":null}

Dec 28 12:14:34.892: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vn8q2/pods","resourceVersion":"16346489"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:14:34.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vn8q2" for this suite.
Dec 28 12:14:40.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:14:41.077: INFO: namespace: e2e-tests-daemonsets-vn8q2, resource: bindings, ignored listing per whitelist
Dec 28 12:14:41.151: INFO: namespace e2e-tests-daemonsets-vn8q2 deletion completed in 6.247700403s

• [SLOW TEST:23.618 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:14:41.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a024331b-296b-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:14:41.344: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-hdmtd" to be "success or failure"
Dec 28 12:14:41.360: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.782371ms
Dec 28 12:14:43.659: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314489602s
Dec 28 12:14:45.670: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325602167s
Dec 28 12:14:47.691: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.346913248s
Dec 28 12:14:49.704: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.359073652s
Dec 28 12:14:51.742: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.39763239s
STEP: Saw pod success
Dec 28 12:14:51.742: INFO: Pod "pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:14:51.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 12:14:52.726: INFO: Waiting for pod pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:14:53.085: INFO: Pod pod-projected-secrets-a0250762-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:14:53.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hdmtd" for this suite.
Dec 28 12:14:59.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:14:59.488: INFO: namespace: e2e-tests-projected-hdmtd, resource: bindings, ignored listing per whitelist
Dec 28 12:14:59.504: INFO: namespace e2e-tests-projected-hdmtd deletion completed in 6.399769064s

• [SLOW TEST:18.352 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:14:59.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-ab1c56ba-296b-11ea-8e71-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-ab1c56a6-296b-11ea-8e71-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 28 12:14:59.861: INFO: Waiting up to 5m0s for pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-2xzb5" to be "success or failure"
Dec 28 12:14:59.888: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.750243ms
Dec 28 12:15:01.910: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048980639s
Dec 28 12:15:03.956: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094495372s
Dec 28 12:15:06.457: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.595928474s
Dec 28 12:15:08.892: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.030979191s
Dec 28 12:15:10.904: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.042348181s
Dec 28 12:15:12.946: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.084128849s
STEP: Saw pod success
Dec 28 12:15:12.946: INFO: Pod "projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:15:12.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Dec 28 12:15:13.181: INFO: Waiting for pod projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:15:13.195: INFO: Pod projected-volume-ab1c5663-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:15:13.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2xzb5" for this suite.
Dec 28 12:15:19.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:15:19.420: INFO: namespace: e2e-tests-projected-2xzb5, resource: bindings, ignored listing per whitelist
Dec 28 12:15:19.733: INFO: namespace e2e-tests-projected-2xzb5 deletion completed in 6.528818439s

• [SLOW TEST:20.229 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:15:19.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-b72bac2e-296b-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 12:15:19.989: INFO: Waiting up to 5m0s for pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-p6pbp" to be "success or failure"
Dec 28 12:15:20.066: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.747737ms
Dec 28 12:15:22.078: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089406245s
Dec 28 12:15:24.143: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153805276s
Dec 28 12:15:26.208: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219628536s
Dec 28 12:15:28.224: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235046894s
Dec 28 12:15:30.241: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.252241285s
Dec 28 12:15:32.459: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.470407046s
STEP: Saw pod success
Dec 28 12:15:32.460: INFO: Pod "pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:15:32.511: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 12:15:32.717: INFO: Waiting for pod pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:15:32.790: INFO: Pod pod-configmaps-b72ce628-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:15:32.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p6pbp" for this suite.
Dec 28 12:15:38.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:15:38.926: INFO: namespace: e2e-tests-configmap-p6pbp, resource: bindings, ignored listing per whitelist
Dec 28 12:15:39.016: INFO: namespace e2e-tests-configmap-p6pbp deletion completed in 6.217877808s

• [SLOW TEST:19.282 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:15:39.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 28 12:15:39.433: INFO: PodSpec: initContainers in spec.initContainers
Dec 28 12:16:48.756: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c2c7d1c9-296b-11ea-8e71-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-fmw97", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-fmw97/pods/pod-init-c2c7d1c9-296b-11ea-8e71-0242ac110005", UID:"c2ca7ab6-296b-11ea-a994-fa163e34d433", ResourceVersion:"16346769", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713132139, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"433630065"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-72s9c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002085b40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-72s9c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-72s9c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-72s9c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015d0c38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d698c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015d0dc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015d0de0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0015d0de8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0015d0dec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713132139, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713132139, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713132139, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713132139, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0012d7340), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0017ef7a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0017ef810)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://60e0e4766456e13fd309f81e1e4687ee1bfc63cbf42b04784b53dffb1d54a705"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012d7380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012d7360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:16:48.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-fmw97" for this suite.
Dec 28 12:17:12.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:17:13.078: INFO: namespace: e2e-tests-init-container-fmw97, resource: bindings, ignored listing per whitelist
Dec 28 12:17:13.095: INFO: namespace e2e-tests-init-container-fmw97 deletion completed in 24.271276467s

• [SLOW TEST:94.079 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:17:13.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fac39dda-296b-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:17:13.404: INFO: Waiting up to 5m0s for pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-64n7c" to be "success or failure"
Dec 28 12:17:13.418: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.597496ms
Dec 28 12:17:15.683: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279377727s
Dec 28 12:17:17.715: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310432439s
Dec 28 12:17:20.075: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.671211065s
Dec 28 12:17:22.091: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686473236s
Dec 28 12:17:24.144: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.740376363s
STEP: Saw pod success
Dec 28 12:17:24.145: INFO: Pod "pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:17:24.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 12:17:24.248: INFO: Waiting for pod pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005 to disappear
Dec 28 12:17:24.320: INFO: Pod pod-secrets-fac54c1e-296b-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:17:24.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-64n7c" for this suite.
Dec 28 12:17:30.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:17:30.772: INFO: namespace: e2e-tests-secrets-64n7c, resource: bindings, ignored listing per whitelist
Dec 28 12:17:30.776: INFO: namespace e2e-tests-secrets-64n7c deletion completed in 6.446292385s

• [SLOW TEST:17.680 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:17:30.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-054d1e5f-296c-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-054d1e5f-296c-11ea-8e71-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:18:59.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bsqtn" for this suite.
Dec 28 12:19:39.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:19:39.726: INFO: namespace: e2e-tests-projected-bsqtn, resource: bindings, ignored listing per whitelist
Dec 28 12:19:39.959: INFO: namespace e2e-tests-projected-bsqtn deletion completed in 40.445435966s

• [SLOW TEST:129.184 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:19:39.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 28 12:19:41.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 28 12:19:41.311: INFO: stderr: ""
Dec 28 12:19:41.311: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:19:41.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-stn6g" for this suite.
Dec 28 12:19:47.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:19:47.679: INFO: namespace: e2e-tests-kubectl-stn6g, resource: bindings, ignored listing per whitelist
Dec 28 12:19:47.738: INFO: namespace e2e-tests-kubectl-stn6g deletion completed in 6.412945902s

• [SLOW TEST:7.778 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:19:47.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:19:58.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-cvmqj" for this suite.
Dec 28 12:20:04.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:20:04.717: INFO: namespace: e2e-tests-emptydir-wrapper-cvmqj, resource: bindings, ignored listing per whitelist
Dec 28 12:20:04.738: INFO: namespace e2e-tests-emptydir-wrapper-cvmqj deletion completed in 6.411796305s

• [SLOW TEST:16.999 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:20:04.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:20:14.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-b27pz" for this suite.
Dec 28 12:21:09.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:21:09.124: INFO: namespace: e2e-tests-kubelet-test-b27pz, resource: bindings, ignored listing per whitelist
Dec 28 12:21:09.172: INFO: namespace e2e-tests-kubelet-test-b27pz deletion completed in 54.168878265s

• [SLOW TEST:64.433 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:21:09.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-6mnt7/configmap-test-8765cf55-296c-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 12:21:09.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-6mnt7" to be "success or failure"
Dec 28 12:21:09.327: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251179ms
Dec 28 12:21:11.628: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309573145s
Dec 28 12:21:13.662: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343119944s
Dec 28 12:21:16.034: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715497601s
Dec 28 12:21:18.056: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737356193s
Dec 28 12:21:20.226: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.907177591s
Dec 28 12:21:22.240: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.921665702s
STEP: Saw pod success
Dec 28 12:21:22.240: INFO: Pod "pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:21:22.247: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005 container env-test: 
STEP: delete the pod
Dec 28 12:21:22.561: INFO: Waiting for pod pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005 to disappear
Dec 28 12:21:22.606: INFO: Pod pod-configmaps-8766dd57-296c-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:21:22.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6mnt7" for this suite.
Dec 28 12:21:28.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:21:28.790: INFO: namespace: e2e-tests-configmap-6mnt7, resource: bindings, ignored listing per whitelist
Dec 28 12:21:28.810: INFO: namespace e2e-tests-configmap-6mnt7 deletion completed in 6.145634368s

• [SLOW TEST:19.638 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:21:28.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7hq87
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-7hq87
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-7hq87
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-7hq87
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-7hq87
Dec 28 12:21:41.247: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7hq87, name: ss-0, uid: 958fb729-296c-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 28 12:21:42.689: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7hq87, name: ss-0, uid: 958fb729-296c-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 28 12:21:42.780: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-7hq87, name: ss-0, uid: 958fb729-296c-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 28 12:21:42.833: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-7hq87
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-7hq87
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-7hq87 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 28 12:21:57.178: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7hq87
Dec 28 12:21:57.205: INFO: Scaling statefulset ss to 0
Dec 28 12:22:17.282: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 12:22:17.291: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:22:17.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7hq87" for this suite.
Dec 28 12:22:25.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:22:25.529: INFO: namespace: e2e-tests-statefulset-7hq87, resource: bindings, ignored listing per whitelist
Dec 28 12:22:25.717: INFO: namespace e2e-tests-statefulset-7hq87 deletion completed in 8.366480901s

• [SLOW TEST:56.908 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:22:25.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-7twqp
I1228 12:22:26.023658       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-7twqp, replica count: 1
I1228 12:22:27.074287       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:28.074521       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:29.074767       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:30.075063       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:31.075280       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:32.075550       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:33.075858       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:34.076197       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:35.076705       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:36.077026       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:22:37.077300       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 12:22:37.353: INFO: Created: latency-svc-k6wzf
Dec 28 12:22:37.366: INFO: Got endpoints: latency-svc-k6wzf [188.981881ms]
Dec 28 12:22:37.557: INFO: Created: latency-svc-fzb6s
Dec 28 12:22:37.593: INFO: Got endpoints: latency-svc-fzb6s [226.208084ms]
Dec 28 12:22:37.738: INFO: Created: latency-svc-cz22z
Dec 28 12:22:37.817: INFO: Created: latency-svc-66vkg
Dec 28 12:22:37.821: INFO: Got endpoints: latency-svc-cz22z [453.993396ms]
Dec 28 12:22:38.011: INFO: Got endpoints: latency-svc-66vkg [644.538608ms]
Dec 28 12:22:38.027: INFO: Created: latency-svc-dbhs7
Dec 28 12:22:38.054: INFO: Got endpoints: latency-svc-dbhs7 [687.381036ms]
Dec 28 12:22:38.336: INFO: Created: latency-svc-mk4j6
Dec 28 12:22:38.364: INFO: Got endpoints: latency-svc-mk4j6 [997.243182ms]
Dec 28 12:22:38.418: INFO: Created: latency-svc-wxcv6
Dec 28 12:22:38.580: INFO: Got endpoints: latency-svc-wxcv6 [1.212756992s]
Dec 28 12:22:38.613: INFO: Created: latency-svc-7t5qm
Dec 28 12:22:38.646: INFO: Got endpoints: latency-svc-7t5qm [1.279418562s]
Dec 28 12:22:38.807: INFO: Created: latency-svc-mhlmw
Dec 28 12:22:38.859: INFO: Created: latency-svc-vd5fl
Dec 28 12:22:38.861: INFO: Got endpoints: latency-svc-mhlmw [1.493727367s]
Dec 28 12:22:39.036: INFO: Got endpoints: latency-svc-vd5fl [1.669038149s]
Dec 28 12:22:39.090: INFO: Created: latency-svc-hwxsl
Dec 28 12:22:39.099: INFO: Got endpoints: latency-svc-hwxsl [1.731543056s]
Dec 28 12:22:39.240: INFO: Created: latency-svc-5x6nv
Dec 28 12:22:39.267: INFO: Got endpoints: latency-svc-5x6nv [1.900792332s]
Dec 28 12:22:39.468: INFO: Created: latency-svc-lqz2k
Dec 28 12:22:39.518: INFO: Created: latency-svc-chzpv
Dec 28 12:22:39.538: INFO: Got endpoints: latency-svc-lqz2k [2.17042566s]
Dec 28 12:22:39.678: INFO: Got endpoints: latency-svc-chzpv [2.311192324s]
Dec 28 12:22:39.719: INFO: Created: latency-svc-p8d6z
Dec 28 12:22:39.768: INFO: Got endpoints: latency-svc-p8d6z [2.40117676s]
Dec 28 12:22:39.988: INFO: Created: latency-svc-m5wbg
Dec 28 12:22:39.995: INFO: Got endpoints: latency-svc-m5wbg [2.627959162s]
Dec 28 12:22:40.061: INFO: Created: latency-svc-5684m
Dec 28 12:22:40.063: INFO: Got endpoints: latency-svc-5684m [2.469725979s]
Dec 28 12:22:40.212: INFO: Created: latency-svc-2fqsg
Dec 28 12:22:40.235: INFO: Got endpoints: latency-svc-2fqsg [2.413434292s]
Dec 28 12:22:40.440: INFO: Created: latency-svc-s8k8m
Dec 28 12:22:40.507: INFO: Got endpoints: latency-svc-s8k8m [2.495044372s]
Dec 28 12:22:40.535: INFO: Created: latency-svc-bzwvb
Dec 28 12:22:40.590: INFO: Got endpoints: latency-svc-bzwvb [2.535931291s]
Dec 28 12:22:40.683: INFO: Created: latency-svc-rj45z
Dec 28 12:22:40.809: INFO: Got endpoints: latency-svc-rj45z [2.445233849s]
Dec 28 12:22:40.851: INFO: Created: latency-svc-r6xk6
Dec 28 12:22:40.877: INFO: Got endpoints: latency-svc-r6xk6 [2.297508347s]
Dec 28 12:22:41.051: INFO: Created: latency-svc-lvkrg
Dec 28 12:22:41.079: INFO: Got endpoints: latency-svc-lvkrg [2.432060128s]
Dec 28 12:22:41.113: INFO: Created: latency-svc-nfd8c
Dec 28 12:22:41.137: INFO: Got endpoints: latency-svc-nfd8c [2.275920469s]
Dec 28 12:22:41.384: INFO: Created: latency-svc-j2rvr
Dec 28 12:22:41.592: INFO: Got endpoints: latency-svc-j2rvr [2.555611921s]
Dec 28 12:22:41.619: INFO: Created: latency-svc-58gtk
Dec 28 12:22:41.646: INFO: Got endpoints: latency-svc-58gtk [2.546843107s]
Dec 28 12:22:41.841: INFO: Created: latency-svc-rndbz
Dec 28 12:22:41.915: INFO: Got endpoints: latency-svc-rndbz [2.647847291s]
Dec 28 12:22:42.116: INFO: Created: latency-svc-ws6tf
Dec 28 12:22:42.140: INFO: Got endpoints: latency-svc-ws6tf [2.602297566s]
Dec 28 12:22:42.207: INFO: Created: latency-svc-x94q7
Dec 28 12:22:42.347: INFO: Got endpoints: latency-svc-x94q7 [2.668366531s]
Dec 28 12:22:42.372: INFO: Created: latency-svc-fz67t
Dec 28 12:22:42.405: INFO: Got endpoints: latency-svc-fz67t [2.636440093s]
Dec 28 12:22:42.571: INFO: Created: latency-svc-slj6m
Dec 28 12:22:42.611: INFO: Got endpoints: latency-svc-slj6m [2.615624038s]
Dec 28 12:22:42.753: INFO: Created: latency-svc-9bq2d
Dec 28 12:22:42.794: INFO: Got endpoints: latency-svc-9bq2d [2.731371471s]
Dec 28 12:22:42.970: INFO: Created: latency-svc-nqdd6
Dec 28 12:22:43.024: INFO: Got endpoints: latency-svc-nqdd6 [2.789332678s]
Dec 28 12:22:43.228: INFO: Created: latency-svc-xgxgn
Dec 28 12:22:43.390: INFO: Got endpoints: latency-svc-xgxgn [2.883125045s]
Dec 28 12:22:43.405: INFO: Created: latency-svc-hqw2m
Dec 28 12:22:43.414: INFO: Got endpoints: latency-svc-hqw2m [2.824284104s]
Dec 28 12:22:43.478: INFO: Created: latency-svc-2c5s4
Dec 28 12:22:43.662: INFO: Got endpoints: latency-svc-2c5s4 [2.852604461s]
Dec 28 12:22:43.699: INFO: Created: latency-svc-wpkt2
Dec 28 12:22:43.771: INFO: Got endpoints: latency-svc-wpkt2 [2.893691873s]
Dec 28 12:22:44.070: INFO: Created: latency-svc-wvpkt
Dec 28 12:22:44.324: INFO: Got endpoints: latency-svc-wvpkt [3.245306328s]
Dec 28 12:22:44.349: INFO: Created: latency-svc-s8849
Dec 28 12:22:44.376: INFO: Got endpoints: latency-svc-s8849 [3.239196336s]
Dec 28 12:22:44.654: INFO: Created: latency-svc-qnghv
Dec 28 12:22:44.682: INFO: Got endpoints: latency-svc-qnghv [3.09025087s]
Dec 28 12:22:44.943: INFO: Created: latency-svc-85wcq
Dec 28 12:22:44.971: INFO: Got endpoints: latency-svc-85wcq [3.324844364s]
Dec 28 12:22:45.161: INFO: Created: latency-svc-z8ksz
Dec 28 12:22:45.161: INFO: Got endpoints: latency-svc-z8ksz [3.24532657s]
Dec 28 12:22:45.212: INFO: Created: latency-svc-vw4wh
Dec 28 12:22:45.352: INFO: Got endpoints: latency-svc-vw4wh [3.212034858s]
Dec 28 12:22:45.410: INFO: Created: latency-svc-4zr7g
Dec 28 12:22:45.599: INFO: Got endpoints: latency-svc-4zr7g [3.252001459s]
Dec 28 12:22:45.608: INFO: Created: latency-svc-lf2cm
Dec 28 12:22:45.617: INFO: Got endpoints: latency-svc-lf2cm [3.211650379s]
Dec 28 12:22:45.694: INFO: Created: latency-svc-jgtcr
Dec 28 12:22:45.824: INFO: Got endpoints: latency-svc-jgtcr [3.212658823s]
Dec 28 12:22:45.844: INFO: Created: latency-svc-rm4ww
Dec 28 12:22:45.865: INFO: Got endpoints: latency-svc-rm4ww [3.070818509s]
Dec 28 12:22:46.055: INFO: Created: latency-svc-wz7tb
Dec 28 12:22:46.065: INFO: Got endpoints: latency-svc-wz7tb [3.040427822s]
Dec 28 12:22:46.319: INFO: Created: latency-svc-7j6j2
Dec 28 12:22:46.329: INFO: Got endpoints: latency-svc-7j6j2 [2.939321144s]
Dec 28 12:22:46.555: INFO: Created: latency-svc-bmx5w
Dec 28 12:22:46.573: INFO: Got endpoints: latency-svc-bmx5w [3.15872571s]
Dec 28 12:22:46.723: INFO: Created: latency-svc-6fwxc
Dec 28 12:22:46.733: INFO: Got endpoints: latency-svc-6fwxc [3.070978522s]
Dec 28 12:22:46.902: INFO: Created: latency-svc-8gqw8
Dec 28 12:22:46.912: INFO: Got endpoints: latency-svc-8gqw8 [3.14061735s]
Dec 28 12:22:47.093: INFO: Created: latency-svc-g5tpc
Dec 28 12:22:47.109: INFO: Got endpoints: latency-svc-g5tpc [2.784591254s]
Dec 28 12:22:47.291: INFO: Created: latency-svc-f4kdt
Dec 28 12:22:47.318: INFO: Got endpoints: latency-svc-f4kdt [2.941696336s]
Dec 28 12:22:47.541: INFO: Created: latency-svc-s59rt
Dec 28 12:22:47.715: INFO: Created: latency-svc-29hgs
Dec 28 12:22:47.723: INFO: Got endpoints: latency-svc-s59rt [3.040402213s]
Dec 28 12:22:47.745: INFO: Got endpoints: latency-svc-29hgs [2.774132686s]
Dec 28 12:22:47.958: INFO: Created: latency-svc-d4fqt
Dec 28 12:22:48.187: INFO: Got endpoints: latency-svc-d4fqt [3.026050862s]
Dec 28 12:22:48.225: INFO: Created: latency-svc-m62g7
Dec 28 12:22:48.398: INFO: Got endpoints: latency-svc-m62g7 [3.046112229s]
Dec 28 12:22:48.468: INFO: Created: latency-svc-2wcnj
Dec 28 12:22:48.627: INFO: Got endpoints: latency-svc-2wcnj [3.028106091s]
Dec 28 12:22:48.660: INFO: Created: latency-svc-vchb2
Dec 28 12:22:48.687: INFO: Got endpoints: latency-svc-vchb2 [3.070033455s]
Dec 28 12:22:48.843: INFO: Created: latency-svc-lms2k
Dec 28 12:22:48.882: INFO: Got endpoints: latency-svc-lms2k [3.058024133s]
Dec 28 12:22:48.947: INFO: Created: latency-svc-jbt7m
Dec 28 12:22:49.082: INFO: Got endpoints: latency-svc-jbt7m [3.217062928s]
Dec 28 12:22:49.149: INFO: Created: latency-svc-w5ww4
Dec 28 12:22:49.294: INFO: Got endpoints: latency-svc-w5ww4 [3.228609804s]
Dec 28 12:22:49.296: INFO: Created: latency-svc-bgfrt
Dec 28 12:22:49.333: INFO: Got endpoints: latency-svc-bgfrt [3.003596842s]
Dec 28 12:22:49.544: INFO: Created: latency-svc-b68bg
Dec 28 12:22:49.579: INFO: Got endpoints: latency-svc-b68bg [3.006072869s]
Dec 28 12:22:49.800: INFO: Created: latency-svc-zphlv
Dec 28 12:22:49.808: INFO: Got endpoints: latency-svc-zphlv [3.074833962s]
Dec 28 12:22:50.020: INFO: Created: latency-svc-mk9tr
Dec 28 12:22:50.036: INFO: Got endpoints: latency-svc-mk9tr [3.123730592s]
Dec 28 12:22:50.379: INFO: Created: latency-svc-drdnk
Dec 28 12:22:50.392: INFO: Got endpoints: latency-svc-drdnk [3.283328332s]
Dec 28 12:22:50.620: INFO: Created: latency-svc-4nxsf
Dec 28 12:22:50.660: INFO: Got endpoints: latency-svc-4nxsf [3.341940741s]
Dec 28 12:22:50.687: INFO: Created: latency-svc-xxgnd
Dec 28 12:22:50.802: INFO: Got endpoints: latency-svc-xxgnd [3.079058035s]
Dec 28 12:22:50.826: INFO: Created: latency-svc-h5n8v
Dec 28 12:22:50.851: INFO: Got endpoints: latency-svc-h5n8v [3.105621532s]
Dec 28 12:22:50.981: INFO: Created: latency-svc-zrgbv
Dec 28 12:22:51.005: INFO: Got endpoints: latency-svc-zrgbv [2.818207311s]
Dec 28 12:22:51.070: INFO: Created: latency-svc-gxfd7
Dec 28 12:22:51.228: INFO: Got endpoints: latency-svc-gxfd7 [2.829612717s]
Dec 28 12:22:51.266: INFO: Created: latency-svc-f6mmd
Dec 28 12:22:51.266: INFO: Got endpoints: latency-svc-f6mmd [2.638781155s]
Dec 28 12:22:51.426: INFO: Created: latency-svc-qpvnt
Dec 28 12:22:51.438: INFO: Got endpoints: latency-svc-qpvnt [2.750851212s]
Dec 28 12:22:51.620: INFO: Created: latency-svc-krgd9
Dec 28 12:22:51.631: INFO: Got endpoints: latency-svc-krgd9 [2.748948762s]
Dec 28 12:22:51.816: INFO: Created: latency-svc-fw2vs
Dec 28 12:22:51.816: INFO: Got endpoints: latency-svc-fw2vs [2.733655387s]
Dec 28 12:22:51.884: INFO: Created: latency-svc-hkjcb
Dec 28 12:22:51.992: INFO: Created: latency-svc-tbnc8
Dec 28 12:22:52.009: INFO: Got endpoints: latency-svc-hkjcb [2.715485936s]
Dec 28 12:22:52.013: INFO: Got endpoints: latency-svc-tbnc8 [2.680000089s]
Dec 28 12:22:52.193: INFO: Created: latency-svc-rsr8f
Dec 28 12:22:52.225: INFO: Got endpoints: latency-svc-rsr8f [2.645735285s]
Dec 28 12:22:52.356: INFO: Created: latency-svc-nq6v4
Dec 28 12:22:52.433: INFO: Got endpoints: latency-svc-nq6v4 [2.624547156s]
Dec 28 12:22:52.596: INFO: Created: latency-svc-v5bhl
Dec 28 12:22:52.610: INFO: Got endpoints: latency-svc-v5bhl [2.574197637s]
Dec 28 12:22:52.794: INFO: Created: latency-svc-w4kxj
Dec 28 12:22:52.910: INFO: Got endpoints: latency-svc-w4kxj [2.518225339s]
Dec 28 12:22:52.962: INFO: Created: latency-svc-5t8kf
Dec 28 12:22:53.092: INFO: Created: latency-svc-zpnnf
Dec 28 12:22:53.120: INFO: Got endpoints: latency-svc-5t8kf [2.459711575s]
Dec 28 12:22:53.414: INFO: Got endpoints: latency-svc-zpnnf [2.611708683s]
Dec 28 12:22:53.421: INFO: Created: latency-svc-zbfp9
Dec 28 12:22:53.432: INFO: Got endpoints: latency-svc-zbfp9 [2.580951973s]
Dec 28 12:22:53.498: INFO: Created: latency-svc-kwck2
Dec 28 12:22:53.554: INFO: Got endpoints: latency-svc-kwck2 [2.548403913s]
Dec 28 12:22:53.580: INFO: Created: latency-svc-4zfkg
Dec 28 12:22:53.643: INFO: Got endpoints: latency-svc-4zfkg [2.414722571s]
Dec 28 12:22:53.773: INFO: Created: latency-svc-s5b22
Dec 28 12:22:53.803: INFO: Got endpoints: latency-svc-s5b22 [2.536264835s]
Dec 28 12:22:53.911: INFO: Created: latency-svc-4r4jh
Dec 28 12:22:54.268: INFO: Got endpoints: latency-svc-4r4jh [2.830454955s]
Dec 28 12:22:55.121: INFO: Created: latency-svc-jw866
Dec 28 12:22:55.328: INFO: Got endpoints: latency-svc-jw866 [3.696686536s]
Dec 28 12:22:55.514: INFO: Created: latency-svc-f8c4k
Dec 28 12:22:55.534: INFO: Got endpoints: latency-svc-f8c4k [3.717831135s]
Dec 28 12:22:55.584: INFO: Created: latency-svc-c9n9k
Dec 28 12:22:55.593: INFO: Got endpoints: latency-svc-c9n9k [3.580406484s]
Dec 28 12:22:55.709: INFO: Created: latency-svc-ct2pm
Dec 28 12:22:55.713: INFO: Got endpoints: latency-svc-ct2pm [3.70328608s]
Dec 28 12:22:55.786: INFO: Created: latency-svc-qqdh9
Dec 28 12:22:55.979: INFO: Got endpoints: latency-svc-qqdh9 [3.753617413s]
Dec 28 12:22:56.025: INFO: Created: latency-svc-fgq6w
Dec 28 12:22:56.052: INFO: Got endpoints: latency-svc-fgq6w [3.618827893s]
Dec 28 12:22:56.277: INFO: Created: latency-svc-r2k52
Dec 28 12:22:56.322: INFO: Got endpoints: latency-svc-r2k52 [3.711388217s]
Dec 28 12:22:56.546: INFO: Created: latency-svc-xrfbk
Dec 28 12:22:56.568: INFO: Got endpoints: latency-svc-xrfbk [3.657153984s]
Dec 28 12:22:56.681: INFO: Created: latency-svc-vw54d
Dec 28 12:22:56.704: INFO: Got endpoints: latency-svc-vw54d [3.583917923s]
Dec 28 12:22:56.761: INFO: Created: latency-svc-xvznh
Dec 28 12:22:56.761: INFO: Got endpoints: latency-svc-xvznh [3.347008204s]
Dec 28 12:22:56.917: INFO: Created: latency-svc-cftvw
Dec 28 12:22:56.953: INFO: Created: latency-svc-s4n2v
Dec 28 12:22:56.957: INFO: Got endpoints: latency-svc-cftvw [3.52467573s]
Dec 28 12:22:56.968: INFO: Got endpoints: latency-svc-s4n2v [3.413571757s]
Dec 28 12:22:57.099: INFO: Created: latency-svc-g76mt
Dec 28 12:22:57.112: INFO: Got endpoints: latency-svc-g76mt [3.468548505s]
Dec 28 12:22:57.297: INFO: Created: latency-svc-grqqs
Dec 28 12:22:57.327: INFO: Got endpoints: latency-svc-grqqs [3.524319664s]
Dec 28 12:22:57.379: INFO: Created: latency-svc-7v2qh
Dec 28 12:22:57.501: INFO: Got endpoints: latency-svc-7v2qh [3.232149386s]
Dec 28 12:22:57.515: INFO: Created: latency-svc-r9bfz
Dec 28 12:22:57.531: INFO: Got endpoints: latency-svc-r9bfz [2.202627636s]
Dec 28 12:22:57.582: INFO: Created: latency-svc-q9fxj
Dec 28 12:22:57.699: INFO: Got endpoints: latency-svc-q9fxj [2.164796688s]
Dec 28 12:22:57.735: INFO: Created: latency-svc-xhprg
Dec 28 12:22:57.783: INFO: Created: latency-svc-6wrxc
Dec 28 12:22:57.810: INFO: Got endpoints: latency-svc-6wrxc [2.097446104s]
Dec 28 12:22:57.998: INFO: Got endpoints: latency-svc-xhprg [2.404934144s]
Dec 28 12:22:58.042: INFO: Created: latency-svc-rw75w
Dec 28 12:22:58.071: INFO: Got endpoints: latency-svc-rw75w [2.091598555s]
Dec 28 12:22:58.201: INFO: Created: latency-svc-vh9hd
Dec 28 12:22:58.221: INFO: Got endpoints: latency-svc-vh9hd [2.168876162s]
Dec 28 12:22:58.473: INFO: Created: latency-svc-62bsq
Dec 28 12:22:58.496: INFO: Got endpoints: latency-svc-62bsq [2.174472336s]
Dec 28 12:22:58.654: INFO: Created: latency-svc-js2gm
Dec 28 12:22:58.682: INFO: Got endpoints: latency-svc-js2gm [2.114492067s]
Dec 28 12:22:58.757: INFO: Created: latency-svc-tgxxk
Dec 28 12:22:58.879: INFO: Got endpoints: latency-svc-tgxxk [2.174612195s]
Dec 28 12:22:58.987: INFO: Created: latency-svc-l6ns4
Dec 28 12:22:59.126: INFO: Got endpoints: latency-svc-l6ns4 [2.365210154s]
Dec 28 12:22:59.152: INFO: Created: latency-svc-vbt5x
Dec 28 12:22:59.185: INFO: Got endpoints: latency-svc-vbt5x [2.228238777s]
Dec 28 12:22:59.333: INFO: Created: latency-svc-xtlzj
Dec 28 12:22:59.340: INFO: Got endpoints: latency-svc-xtlzj [2.371591344s]
Dec 28 12:22:59.557: INFO: Created: latency-svc-ghxhj
Dec 28 12:22:59.559: INFO: Got endpoints: latency-svc-ghxhj [2.446832336s]
Dec 28 12:22:59.620: INFO: Created: latency-svc-r5f5c
Dec 28 12:22:59.626: INFO: Got endpoints: latency-svc-r5f5c [2.298498241s]
Dec 28 12:22:59.761: INFO: Created: latency-svc-k9xd4
Dec 28 12:22:59.810: INFO: Got endpoints: latency-svc-k9xd4 [2.309067991s]
Dec 28 12:23:00.012: INFO: Created: latency-svc-qgr2j
Dec 28 12:23:00.030: INFO: Got endpoints: latency-svc-qgr2j [2.499429171s]
Dec 28 12:23:00.193: INFO: Created: latency-svc-xnc6r
Dec 28 12:23:00.210: INFO: Got endpoints: latency-svc-xnc6r [2.511014243s]
Dec 28 12:23:00.470: INFO: Created: latency-svc-nr2d5
Dec 28 12:23:00.492: INFO: Got endpoints: latency-svc-nr2d5 [2.681433487s]
Dec 28 12:23:00.611: INFO: Created: latency-svc-xs2bv
Dec 28 12:23:00.637: INFO: Got endpoints: latency-svc-xs2bv [2.638381261s]
Dec 28 12:23:00.817: INFO: Created: latency-svc-j4lnm
Dec 28 12:23:00.839: INFO: Created: latency-svc-rrbwn
Dec 28 12:23:00.894: INFO: Got endpoints: latency-svc-rrbwn [2.673555988s]
Dec 28 12:23:00.895: INFO: Got endpoints: latency-svc-j4lnm [2.823557141s]
Dec 28 12:23:00.987: INFO: Created: latency-svc-ndcs8
Dec 28 12:23:01.005: INFO: Got endpoints: latency-svc-ndcs8 [2.508631941s]
Dec 28 12:23:01.073: INFO: Created: latency-svc-xjngz
Dec 28 12:23:01.218: INFO: Got endpoints: latency-svc-xjngz [2.535184015s]
Dec 28 12:23:01.267: INFO: Created: latency-svc-zt45x
Dec 28 12:23:01.278: INFO: Got endpoints: latency-svc-zt45x [2.399630854s]
Dec 28 12:23:01.570: INFO: Created: latency-svc-5s5pj
Dec 28 12:23:01.724: INFO: Got endpoints: latency-svc-5s5pj [2.597976314s]
Dec 28 12:23:01.752: INFO: Created: latency-svc-44nf7
Dec 28 12:23:01.786: INFO: Got endpoints: latency-svc-44nf7 [2.600888188s]
Dec 28 12:23:01.893: INFO: Created: latency-svc-6vp5d
Dec 28 12:23:01.927: INFO: Got endpoints: latency-svc-6vp5d [2.587821908s]
Dec 28 12:23:01.968: INFO: Created: latency-svc-9z4vs
Dec 28 12:23:02.091: INFO: Got endpoints: latency-svc-9z4vs [2.532561551s]
Dec 28 12:23:02.136: INFO: Created: latency-svc-sgp5r
Dec 28 12:23:02.250: INFO: Got endpoints: latency-svc-sgp5r [2.624158441s]
Dec 28 12:23:02.285: INFO: Created: latency-svc-xkj9b
Dec 28 12:23:02.327: INFO: Got endpoints: latency-svc-xkj9b [2.517401603s]
Dec 28 12:23:02.481: INFO: Created: latency-svc-vthhk
Dec 28 12:23:02.497: INFO: Got endpoints: latency-svc-vthhk [246.701387ms]
Dec 28 12:23:02.556: INFO: Created: latency-svc-jwnlc
Dec 28 12:23:02.736: INFO: Got endpoints: latency-svc-jwnlc [2.705410131s]
Dec 28 12:23:02.878: INFO: Created: latency-svc-h2z9z
Dec 28 12:23:02.896: INFO: Created: latency-svc-8zjz8
Dec 28 12:23:02.936: INFO: Created: latency-svc-2zdwp
Dec 28 12:23:02.939: INFO: Got endpoints: latency-svc-8zjz8 [2.446692193s]
Dec 28 12:23:02.939: INFO: Got endpoints: latency-svc-h2z9z [2.728470423s]
Dec 28 12:23:02.945: INFO: Got endpoints: latency-svc-2zdwp [2.307485444s]
Dec 28 12:23:03.108: INFO: Created: latency-svc-w9wb8
Dec 28 12:23:03.108: INFO: Got endpoints: latency-svc-w9wb8 [2.213531774s]
Dec 28 12:23:03.161: INFO: Created: latency-svc-mzscx
Dec 28 12:23:03.397: INFO: Got endpoints: latency-svc-mzscx [2.502732813s]
Dec 28 12:23:03.437: INFO: Created: latency-svc-szjzj
Dec 28 12:23:03.438: INFO: Got endpoints: latency-svc-szjzj [2.432783531s]
Dec 28 12:23:03.485: INFO: Created: latency-svc-jwlx5
Dec 28 12:23:03.492: INFO: Got endpoints: latency-svc-jwlx5 [2.274173495s]
Dec 28 12:23:03.646: INFO: Created: latency-svc-xpjgb
Dec 28 12:23:03.649: INFO: Got endpoints: latency-svc-xpjgb [2.370691352s]
Dec 28 12:23:03.810: INFO: Created: latency-svc-pj5kt
Dec 28 12:23:03.825: INFO: Got endpoints: latency-svc-pj5kt [2.100724385s]
Dec 28 12:23:04.008: INFO: Created: latency-svc-fdjm7
Dec 28 12:23:04.046: INFO: Got endpoints: latency-svc-fdjm7 [2.259527212s]
Dec 28 12:23:04.243: INFO: Created: latency-svc-2nhw5
Dec 28 12:23:04.341: INFO: Got endpoints: latency-svc-2nhw5 [2.413560401s]
Dec 28 12:23:04.530: INFO: Created: latency-svc-gfsmb
Dec 28 12:23:04.624: INFO: Got endpoints: latency-svc-gfsmb [2.532431194s]
Dec 28 12:23:04.645: INFO: Created: latency-svc-qcmhl
Dec 28 12:23:04.655: INFO: Got endpoints: latency-svc-qcmhl [2.32810286s]
Dec 28 12:23:04.690: INFO: Created: latency-svc-zqmts
Dec 28 12:23:04.713: INFO: Got endpoints: latency-svc-zqmts [2.216365982s]
Dec 28 12:23:05.820: INFO: Created: latency-svc-9twhc
Dec 28 12:23:05.840: INFO: Got endpoints: latency-svc-9twhc [3.103915733s]
Dec 28 12:23:06.198: INFO: Created: latency-svc-w2vxp
Dec 28 12:23:06.230: INFO: Got endpoints: latency-svc-w2vxp [3.290793841s]
Dec 28 12:23:06.429: INFO: Created: latency-svc-w5wkm
Dec 28 12:23:06.449: INFO: Got endpoints: latency-svc-w5wkm [3.510464841s]
Dec 28 12:23:06.627: INFO: Created: latency-svc-px5r2
Dec 28 12:23:06.632: INFO: Got endpoints: latency-svc-px5r2 [3.687477064s]
Dec 28 12:23:06.851: INFO: Created: latency-svc-lqtqn
Dec 28 12:23:07.003: INFO: Got endpoints: latency-svc-lqtqn [3.894789217s]
Dec 28 12:23:07.024: INFO: Created: latency-svc-6f7nf
Dec 28 12:23:07.041: INFO: Got endpoints: latency-svc-6f7nf [3.643783721s]
Dec 28 12:23:07.073: INFO: Created: latency-svc-tkz86
Dec 28 12:23:07.082: INFO: Got endpoints: latency-svc-tkz86 [3.644446044s]
Dec 28 12:23:07.245: INFO: Created: latency-svc-pqvw6
Dec 28 12:23:07.246: INFO: Got endpoints: latency-svc-pqvw6 [3.753415586s]
Dec 28 12:23:07.445: INFO: Created: latency-svc-vtnjv
Dec 28 12:23:07.445: INFO: Got endpoints: latency-svc-vtnjv [3.795998654s]
Dec 28 12:23:07.602: INFO: Created: latency-svc-w85lw
Dec 28 12:23:07.609: INFO: Got endpoints: latency-svc-w85lw [3.783820847s]
Dec 28 12:23:07.679: INFO: Created: latency-svc-vrrqq
Dec 28 12:23:07.805: INFO: Got endpoints: latency-svc-vrrqq [3.758841847s]
Dec 28 12:23:07.814: INFO: Created: latency-svc-hllnw
Dec 28 12:23:07.881: INFO: Created: latency-svc-hp5hs
Dec 28 12:23:07.893: INFO: Got endpoints: latency-svc-hllnw [3.55149529s]
Dec 28 12:23:08.046: INFO: Got endpoints: latency-svc-hp5hs [3.422635173s]
Dec 28 12:23:08.077: INFO: Created: latency-svc-z26gg
Dec 28 12:23:08.105: INFO: Got endpoints: latency-svc-z26gg [3.449736587s]
Dec 28 12:23:08.286: INFO: Created: latency-svc-k4972
Dec 28 12:23:08.360: INFO: Got endpoints: latency-svc-k4972 [3.646723398s]
Dec 28 12:23:08.371: INFO: Created: latency-svc-qffjz
Dec 28 12:23:08.535: INFO: Got endpoints: latency-svc-qffjz [2.694602617s]
Dec 28 12:23:08.614: INFO: Created: latency-svc-2q7n4
Dec 28 12:23:08.693: INFO: Got endpoints: latency-svc-2q7n4 [2.463373187s]
Dec 28 12:23:08.705: INFO: Created: latency-svc-rrhlz
Dec 28 12:23:08.721: INFO: Got endpoints: latency-svc-rrhlz [2.271743102s]
Dec 28 12:23:08.776: INFO: Created: latency-svc-f859z
Dec 28 12:23:08.844: INFO: Got endpoints: latency-svc-f859z [2.211579203s]
Dec 28 12:23:08.884: INFO: Created: latency-svc-5t94r
Dec 28 12:23:08.907: INFO: Got endpoints: latency-svc-5t94r [1.903551131s]
Dec 28 12:23:09.054: INFO: Created: latency-svc-mgj2z
Dec 28 12:23:09.099: INFO: Got endpoints: latency-svc-mgj2z [2.057368549s]
Dec 28 12:23:09.102: INFO: Created: latency-svc-jwnj9
Dec 28 12:23:09.163: INFO: Got endpoints: latency-svc-jwnj9 [2.080153911s]
Dec 28 12:23:09.192: INFO: Created: latency-svc-j2df8
Dec 28 12:23:09.213: INFO: Got endpoints: latency-svc-j2df8 [1.967453014s]
Dec 28 12:23:09.259: INFO: Created: latency-svc-v78hn
Dec 28 12:23:09.396: INFO: Got endpoints: latency-svc-v78hn [1.950727724s]
Dec 28 12:23:09.413: INFO: Created: latency-svc-xkhzd
Dec 28 12:23:09.428: INFO: Got endpoints: latency-svc-xkhzd [1.819257507s]
Dec 28 12:23:09.636: INFO: Created: latency-svc-98gq6
Dec 28 12:23:09.670: INFO: Got endpoints: latency-svc-98gq6 [1.864519578s]
Dec 28 12:23:09.719: INFO: Created: latency-svc-chbd7
Dec 28 12:23:09.820: INFO: Got endpoints: latency-svc-chbd7 [1.927334827s]
Dec 28 12:23:09.854: INFO: Created: latency-svc-s2wfh
Dec 28 12:23:09.880: INFO: Got endpoints: latency-svc-s2wfh [1.833421975s]
Dec 28 12:23:10.106: INFO: Created: latency-svc-4qp4g
Dec 28 12:23:10.135: INFO: Got endpoints: latency-svc-4qp4g [2.029641485s]
Dec 28 12:23:10.160: INFO: Created: latency-svc-h64h8
Dec 28 12:23:10.269: INFO: Got endpoints: latency-svc-h64h8 [1.909065589s]
Dec 28 12:23:10.281: INFO: Created: latency-svc-zflcz
Dec 28 12:23:10.319: INFO: Got endpoints: latency-svc-zflcz [1.78387592s]
Dec 28 12:23:10.549: INFO: Created: latency-svc-49qhs
Dec 28 12:23:10.549: INFO: Got endpoints: latency-svc-49qhs [1.855699241s]
Dec 28 12:23:10.726: INFO: Created: latency-svc-9tt9f
Dec 28 12:23:10.776: INFO: Got endpoints: latency-svc-9tt9f [2.054823945s]
Dec 28 12:23:10.781: INFO: Created: latency-svc-8jkv7
Dec 28 12:23:10.802: INFO: Got endpoints: latency-svc-8jkv7 [1.957912644s]
Dec 28 12:23:10.879: INFO: Created: latency-svc-jqrkh
Dec 28 12:23:10.889: INFO: Got endpoints: latency-svc-jqrkh [1.982097718s]
Dec 28 12:23:10.936: INFO: Created: latency-svc-2tcw2
Dec 28 12:23:10.951: INFO: Got endpoints: latency-svc-2tcw2 [1.851628906s]
Dec 28 12:23:11.158: INFO: Created: latency-svc-fn85x
Dec 28 12:23:11.169: INFO: Got endpoints: latency-svc-fn85x [2.00682754s]
Dec 28 12:23:11.236: INFO: Created: latency-svc-5tgpc
Dec 28 12:23:11.347: INFO: Got endpoints: latency-svc-5tgpc [2.134168335s]
Dec 28 12:23:11.395: INFO: Created: latency-svc-dngxc
Dec 28 12:23:11.416: INFO: Got endpoints: latency-svc-dngxc [2.019487952s]
Dec 28 12:23:11.567: INFO: Created: latency-svc-p27m8
Dec 28 12:23:11.578: INFO: Got endpoints: latency-svc-p27m8 [2.149051753s]
Dec 28 12:23:11.633: INFO: Created: latency-svc-njjkt
Dec 28 12:23:11.648: INFO: Got endpoints: latency-svc-njjkt [1.978291576s]
Dec 28 12:23:11.717: INFO: Created: latency-svc-gthc9
Dec 28 12:23:11.819: INFO: Got endpoints: latency-svc-gthc9 [1.998036955s]
Dec 28 12:23:11.827: INFO: Created: latency-svc-xzdl4
Dec 28 12:23:11.925: INFO: Got endpoints: latency-svc-xzdl4 [2.044632518s]
Dec 28 12:23:12.269: INFO: Created: latency-svc-7567d
Dec 28 12:23:12.800: INFO: Got endpoints: latency-svc-7567d [2.664628901s]
Dec 28 12:23:12.972: INFO: Created: latency-svc-2r5nm
Dec 28 12:23:12.988: INFO: Got endpoints: latency-svc-2r5nm [2.717979375s]
Dec 28 12:23:13.066: INFO: Created: latency-svc-qv98d
Dec 28 12:23:13.111: INFO: Got endpoints: latency-svc-qv98d [2.791682889s]
Dec 28 12:23:13.187: INFO: Created: latency-svc-q4xf5
Dec 28 12:23:13.310: INFO: Got endpoints: latency-svc-q4xf5 [2.760778534s]
Dec 28 12:23:13.342: INFO: Created: latency-svc-jvxv9
Dec 28 12:23:13.379: INFO: Got endpoints: latency-svc-jvxv9 [2.603311669s]
Dec 28 12:23:13.405: INFO: Created: latency-svc-gsp8l
Dec 28 12:23:13.515: INFO: Got endpoints: latency-svc-gsp8l [2.713590399s]
Dec 28 12:23:13.541: INFO: Created: latency-svc-947nz
Dec 28 12:23:13.546: INFO: Got endpoints: latency-svc-947nz [2.657178986s]
Dec 28 12:23:13.546: INFO: Latencies: [226.208084ms 246.701387ms 453.993396ms 644.538608ms 687.381036ms 997.243182ms 1.212756992s 1.279418562s 1.493727367s 1.669038149s 1.731543056s 1.78387592s 1.819257507s 1.833421975s 1.851628906s 1.855699241s 1.864519578s 1.900792332s 1.903551131s 1.909065589s 1.927334827s 1.950727724s 1.957912644s 1.967453014s 1.978291576s 1.982097718s 1.998036955s 2.00682754s 2.019487952s 2.029641485s 2.044632518s 2.054823945s 2.057368549s 2.080153911s 2.091598555s 2.097446104s 2.100724385s 2.114492067s 2.134168335s 2.149051753s 2.164796688s 2.168876162s 2.17042566s 2.174472336s 2.174612195s 2.202627636s 2.211579203s 2.213531774s 2.216365982s 2.228238777s 2.259527212s 2.271743102s 2.274173495s 2.275920469s 2.297508347s 2.298498241s 2.307485444s 2.309067991s 2.311192324s 2.32810286s 2.365210154s 2.370691352s 2.371591344s 2.399630854s 2.40117676s 2.404934144s 2.413434292s 2.413560401s 2.414722571s 2.432060128s 2.432783531s 2.445233849s 2.446692193s 2.446832336s 2.459711575s 2.463373187s 2.469725979s 2.495044372s 2.499429171s 2.502732813s 2.508631941s 2.511014243s 2.517401603s 2.518225339s 2.532431194s 2.532561551s 2.535184015s 2.535931291s 2.536264835s 2.546843107s 2.548403913s 2.555611921s 2.574197637s 2.580951973s 2.587821908s 2.597976314s 2.600888188s 2.602297566s 2.603311669s 2.611708683s 2.615624038s 2.624158441s 2.624547156s 2.627959162s 2.636440093s 2.638381261s 2.638781155s 2.645735285s 2.647847291s 2.657178986s 2.664628901s 2.668366531s 2.673555988s 2.680000089s 2.681433487s 2.694602617s 2.705410131s 2.713590399s 2.715485936s 2.717979375s 2.728470423s 2.731371471s 2.733655387s 2.748948762s 2.750851212s 2.760778534s 2.774132686s 2.784591254s 2.789332678s 2.791682889s 2.818207311s 2.823557141s 2.824284104s 2.829612717s 2.830454955s 2.852604461s 2.883125045s 2.893691873s 2.939321144s 2.941696336s 3.003596842s 3.006072869s 3.026050862s 3.028106091s 3.040402213s 3.040427822s 3.046112229s 3.058024133s 3.070033455s 3.070818509s 3.070978522s 3.074833962s 3.079058035s 3.09025087s 3.103915733s 3.105621532s 3.123730592s 3.14061735s 3.15872571s 3.211650379s 3.212034858s 3.212658823s 3.217062928s 3.228609804s 3.232149386s 3.239196336s 3.245306328s 3.24532657s 3.252001459s 3.283328332s 3.290793841s 3.324844364s 3.341940741s 3.347008204s 3.413571757s 3.422635173s 3.449736587s 3.468548505s 3.510464841s 3.524319664s 3.52467573s 3.55149529s 3.580406484s 3.583917923s 3.618827893s 3.643783721s 3.644446044s 3.646723398s 3.657153984s 3.687477064s 3.696686536s 3.70328608s 3.711388217s 3.717831135s 3.753415586s 3.753617413s 3.758841847s 3.783820847s 3.795998654s 3.894789217s]
Dec 28 12:23:13.547: INFO: 50 %ile: 2.615624038s
Dec 28 12:23:13.547: INFO: 90 %ile: 3.52467573s
Dec 28 12:23:13.547: INFO: 99 %ile: 3.795998654s
Dec 28 12:23:13.547: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:23:13.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-7twqp" for this suite.
Dec 28 12:24:05.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:24:05.720: INFO: namespace: e2e-tests-svc-latency-7twqp, resource: bindings, ignored listing per whitelist
Dec 28 12:24:05.756: INFO: namespace e2e-tests-svc-latency-7twqp deletion completed in 52.203263398s

• [SLOW TEST:100.039 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:24:05.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:24:05.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-qnsrg" to be "success or failure"
Dec 28 12:24:06.127: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 129.878077ms
Dec 28 12:24:08.146: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148816155s
Dec 28 12:24:10.165: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16815786s
Dec 28 12:24:12.184: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186501166s
Dec 28 12:24:14.194: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197031945s
Dec 28 12:24:16.447: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.449820071s
Dec 28 12:24:18.607: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.60945362s
STEP: Saw pod success
Dec 28 12:24:18.607: INFO: Pod "downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:24:18.621: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:24:19.313: INFO: Waiting for pod downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005 to disappear
Dec 28 12:24:19.390: INFO: Pod downwardapi-volume-f0b56519-296c-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:24:19.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qnsrg" for this suite.
Dec 28 12:24:25.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:24:25.548: INFO: namespace: e2e-tests-downward-api-qnsrg, resource: bindings, ignored listing per whitelist
Dec 28 12:24:25.653: INFO: namespace e2e-tests-downward-api-qnsrg deletion completed in 6.24791407s

• [SLOW TEST:19.896 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:24:25.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-8gtl
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 12:24:25.919: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8gtl" in namespace "e2e-tests-subpath-bbscm" to be "success or failure"
Dec 28 12:24:25.940: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 21.035595ms
Dec 28 12:24:28.018: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098423182s
Dec 28 12:24:30.034: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114342139s
Dec 28 12:24:32.331: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411401757s
Dec 28 12:24:34.352: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432866994s
Dec 28 12:24:36.374: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.454664163s
Dec 28 12:24:38.387: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.467370385s
Dec 28 12:24:40.428: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 14.509239915s
Dec 28 12:24:42.460: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Pending", Reason="", readiness=false. Elapsed: 16.540810179s
Dec 28 12:24:44.498: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 18.578268733s
Dec 28 12:24:46.523: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 20.60400894s
Dec 28 12:24:48.557: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 22.637387807s
Dec 28 12:24:50.596: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 24.67679722s
Dec 28 12:24:52.612: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 26.692277095s
Dec 28 12:24:54.739: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 28.819405091s
Dec 28 12:24:56.815: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 30.895940166s
Dec 28 12:24:58.850: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Running", Reason="", readiness=false. Elapsed: 32.931200333s
Dec 28 12:25:00.873: INFO: Pod "pod-subpath-test-configmap-8gtl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.953742577s
STEP: Saw pod success
Dec 28 12:25:00.873: INFO: Pod "pod-subpath-test-configmap-8gtl" satisfied condition "success or failure"
Dec 28 12:25:00.884: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-8gtl container test-container-subpath-configmap-8gtl: 
STEP: delete the pod
Dec 28 12:25:01.018: INFO: Waiting for pod pod-subpath-test-configmap-8gtl to disappear
Dec 28 12:25:01.058: INFO: Pod pod-subpath-test-configmap-8gtl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8gtl
Dec 28 12:25:01.058: INFO: Deleting pod "pod-subpath-test-configmap-8gtl" in namespace "e2e-tests-subpath-bbscm"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:25:01.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-bbscm" for this suite.
Dec 28 12:25:09.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:25:09.246: INFO: namespace: e2e-tests-subpath-bbscm, resource: bindings, ignored listing per whitelist
Dec 28 12:25:09.275: INFO: namespace e2e-tests-subpath-bbscm deletion completed in 8.202744967s

• [SLOW TEST:43.622 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:25:09.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-169313ec-296d-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:25:09.547: INFO: Waiting up to 5m0s for pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-f5zbk" to be "success or failure"
Dec 28 12:25:09.625: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 78.108336ms
Dec 28 12:25:11.789: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241395893s
Dec 28 12:25:13.815: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26785321s
Dec 28 12:25:15.843: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295775353s
Dec 28 12:25:17.866: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.318466574s
Dec 28 12:25:19.882: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.334745083s
Dec 28 12:25:21.909: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.361615859s
STEP: Saw pod success
Dec 28 12:25:21.909: INFO: Pod "pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:25:21.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 12:25:22.062: INFO: Waiting for pod pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:25:22.077: INFO: Pod pod-secrets-1693fd00-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:25:22.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-f5zbk" for this suite.
Dec 28 12:25:28.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:25:28.247: INFO: namespace: e2e-tests-secrets-f5zbk, resource: bindings, ignored listing per whitelist
Dec 28 12:25:28.295: INFO: namespace e2e-tests-secrets-f5zbk deletion completed in 6.183792848s

• [SLOW TEST:19.020 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:25:28.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 28 12:25:38.707: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:26:07.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-45wbz" for this suite.
Dec 28 12:26:13.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:26:13.153: INFO: namespace: e2e-tests-namespaces-45wbz, resource: bindings, ignored listing per whitelist
Dec 28 12:26:13.259: INFO: namespace e2e-tests-namespaces-45wbz deletion completed in 6.193324202s
STEP: Destroying namespace "e2e-tests-nsdeletetest-9cdqh" for this suite.
Dec 28 12:26:13.262: INFO: Namespace e2e-tests-nsdeletetest-9cdqh was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-x5p9k" for this suite.
Dec 28 12:26:19.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:26:19.327: INFO: namespace: e2e-tests-nsdeletetest-x5p9k, resource: bindings, ignored listing per whitelist
Dec 28 12:26:19.489: INFO: namespace e2e-tests-nsdeletetest-x5p9k deletion completed in 6.226715467s

• [SLOW TEST:51.194 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:26:19.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 28 12:26:19.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k8l88'
Dec 28 12:26:22.100: INFO: stderr: ""
Dec 28 12:26:22.100: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 28 12:26:24.256: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:24.256: INFO: Found 0 / 1
Dec 28 12:26:25.372: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:25.372: INFO: Found 0 / 1
Dec 28 12:26:26.118: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:26.118: INFO: Found 0 / 1
Dec 28 12:26:27.114: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:27.114: INFO: Found 0 / 1
Dec 28 12:26:28.112: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:28.112: INFO: Found 0 / 1
Dec 28 12:26:29.339: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:29.339: INFO: Found 0 / 1
Dec 28 12:26:30.114: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:30.114: INFO: Found 0 / 1
Dec 28 12:26:31.117: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:31.117: INFO: Found 0 / 1
Dec 28 12:26:32.115: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:32.116: INFO: Found 0 / 1
Dec 28 12:26:33.113: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:33.113: INFO: Found 1 / 1
Dec 28 12:26:33.113: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 28 12:26:33.119: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:26:33.119: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Dec 28 12:26:33.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bvvnx redis-master --namespace=e2e-tests-kubectl-k8l88'
Dec 28 12:26:33.342: INFO: stderr: ""
Dec 28 12:26:33.342: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Dec 12:26:31.138 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Dec 12:26:31.138 # Server started, Redis version 3.2.12\n1:M 28 Dec 12:26:31.138 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Dec 12:26:31.138 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 28 12:26:33.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bvvnx redis-master --namespace=e2e-tests-kubectl-k8l88 --tail=1'
Dec 28 12:26:33.577: INFO: stderr: ""
Dec 28 12:26:33.577: INFO: stdout: "1:M 28 Dec 12:26:31.138 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 28 12:26:33.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bvvnx redis-master --namespace=e2e-tests-kubectl-k8l88 --limit-bytes=1'
Dec 28 12:26:33.719: INFO: stderr: ""
Dec 28 12:26:33.719: INFO: stdout: " "
STEP: exposing timestamps
Dec 28 12:26:33.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bvvnx redis-master --namespace=e2e-tests-kubectl-k8l88 --tail=1 --timestamps'
Dec 28 12:26:33.890: INFO: stderr: ""
Dec 28 12:26:33.890: INFO: stdout: "2019-12-28T12:26:31.139243795Z 1:M 28 Dec 12:26:31.138 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 28 12:26:36.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bvvnx redis-master --namespace=e2e-tests-kubectl-k8l88 --since=1s'
Dec 28 12:26:36.653: INFO: stderr: ""
Dec 28 12:26:36.653: INFO: stdout: ""
Dec 28 12:26:36.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-bvvnx redis-master --namespace=e2e-tests-kubectl-k8l88 --since=24h'
Dec 28 12:26:36.817: INFO: stderr: ""
Dec 28 12:26:36.817: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Dec 12:26:31.138 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Dec 12:26:31.138 # Server started, Redis version 3.2.12\n1:M 28 Dec 12:26:31.138 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Dec 12:26:31.138 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 28 12:26:36.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k8l88'
Dec 28 12:26:36.918: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 12:26:36.918: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 28 12:26:36.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-k8l88'
Dec 28 12:26:37.060: INFO: stderr: "No resources found.\n"
Dec 28 12:26:37.060: INFO: stdout: ""
Dec 28 12:26:37.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-k8l88 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 12:26:37.170: INFO: stderr: ""
Dec 28 12:26:37.170: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:26:37.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k8l88" for this suite.
Dec 28 12:26:59.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:26:59.424: INFO: namespace: e2e-tests-kubectl-k8l88, resource: bindings, ignored listing per whitelist
Dec 28 12:26:59.488: INFO: namespace e2e-tests-kubectl-k8l88 deletion completed in 22.292082652s

• [SLOW TEST:39.998 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:26:59.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-58841847-296d-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:27:00.188: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-mcmsv" to be "success or failure"
Dec 28 12:27:00.198: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.461239ms
Dec 28 12:27:02.213: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02425504s
Dec 28 12:27:04.225: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036937645s
Dec 28 12:27:06.745: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556972204s
Dec 28 12:27:08.759: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570192835s
Dec 28 12:27:10.783: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.594753183s
Dec 28 12:27:12.844: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.65545929s
STEP: Saw pod success
Dec 28 12:27:12.844: INFO: Pod "pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:27:12.876: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 12:27:13.189: INFO: Waiting for pod pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:27:13.198: INFO: Pod pod-projected-secrets-58859366-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:27:13.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mcmsv" for this suite.
Dec 28 12:27:19.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:27:19.472: INFO: namespace: e2e-tests-projected-mcmsv, resource: bindings, ignored listing per whitelist
Dec 28 12:27:19.472: INFO: namespace e2e-tests-projected-mcmsv deletion completed in 6.269423539s

• [SLOW TEST:19.984 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:27:19.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 28 12:27:19.902: INFO: Waiting up to 5m0s for pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-containers-tjcps" to be "success or failure"
Dec 28 12:27:19.921: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.099309ms
Dec 28 12:27:21.949: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046459982s
Dec 28 12:27:23.984: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081434865s
Dec 28 12:27:26.602: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.700291227s
Dec 28 12:27:28.633: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.730473995s
Dec 28 12:27:30.664: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.761481884s
STEP: Saw pod success
Dec 28 12:27:30.664: INFO: Pod "client-containers-642eb7b9-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:27:30.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-642eb7b9-296d-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:27:30.769: INFO: Waiting for pod client-containers-642eb7b9-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:27:30.781: INFO: Pod client-containers-642eb7b9-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:27:30.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-tjcps" for this suite.
Dec 28 12:27:36.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:27:36.981: INFO: namespace: e2e-tests-containers-tjcps, resource: bindings, ignored listing per whitelist
Dec 28 12:27:37.021: INFO: namespace e2e-tests-containers-tjcps deletion completed in 6.233495946s

• [SLOW TEST:17.548 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
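The Docker Containers test above verifies that a pod can override both the image's default ENTRYPOINT and CMD. In the Pod API those map to the container's `command` and `args` fields. A sketch of such a manifest as a Python dict (the image and echo arguments are illustrative, not taken from the log):

```python
# Minimal pod manifest overriding both halves of the image default:
# spec.containers[].command replaces ENTRYPOINT,
# spec.containers[].args replaces CMD.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",                       # illustrative image
            "command": ["/bin/echo"],                 # overrides ENTRYPOINT
            "args": ["override", "arguments"],        # overrides CMD
        }],
    },
}
```

Setting `command` without `args` would discard the image's CMD entirely, which is why the test exercises both fields together ("override all").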
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:27:37.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 28 12:27:37.282: INFO: Waiting up to 5m0s for pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-dpjrh" to be "success or failure"
Dec 28 12:27:37.352: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 70.213818ms
Dec 28 12:27:39.374: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091927055s
Dec 28 12:27:41.394: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112679401s
Dec 28 12:27:43.860: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57835905s
Dec 28 12:27:45.876: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593848415s
Dec 28 12:27:47.895: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.61314505s
STEP: Saw pod success
Dec 28 12:27:47.895: INFO: Pod "pod-6ea349e0-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:27:47.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6ea349e0-296d-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:27:48.273: INFO: Waiting for pod pod-6ea349e0-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:27:48.304: INFO: Pod pod-6ea349e0-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:27:48.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dpjrh" for this suite.
Dec 28 12:27:55.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:27:55.845: INFO: namespace: e2e-tests-emptydir-dpjrh, resource: bindings, ignored listing per whitelist
Dec 28 12:27:55.912: INFO: namespace e2e-tests-emptydir-dpjrh deletion completed in 6.748164859s

• [SLOW TEST:18.891 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
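The EmptyDir test above requests a RAM-backed volume by setting the emptyDir `medium` to `Memory`, then checks the resulting tmpfs mount's mode from inside the container. A hedged sketch of the relevant spec fragments (names and the stat command are illustrative):

```python
# emptyDir backed by tmpfs: medium "Memory" requests a RAM-backed mount.
volume = {"name": "test-volume", "emptyDir": {"medium": "Memory"}}

container = {
    "name": "test-container",
    "image": "busybox",
    # the conformance test runs a helper binary that reports the mount's
    # filesystem type and file mode; printing the mode with stat is a
    # rough shell equivalent, not the test's actual command
    "command": ["sh", "-c", "stat -c '%a' /test-volume"],
    "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
}
```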
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:27:55.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-79dc010b-296d-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 12:27:56.106: INFO: Waiting up to 5m0s for pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-79dhv" to be "success or failure"
Dec 28 12:27:56.163: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.889089ms
Dec 28 12:27:58.408: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302349346s
Dec 28 12:28:00.421: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31571199s
Dec 28 12:28:02.796: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690560495s
Dec 28 12:28:04.805: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.699801917s
Dec 28 12:28:06.819: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.713538517s
Dec 28 12:28:08.839: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.733125258s
STEP: Saw pod success
Dec 28 12:28:08.839: INFO: Pod "pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:28:08.846: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 12:28:08.990: INFO: Waiting for pod pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:28:09.004: INFO: Pod pod-configmaps-79dd3fbe-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:28:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-79dhv" for this suite.
Dec 28 12:28:15.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:28:15.323: INFO: namespace: e2e-tests-configmap-79dhv, resource: bindings, ignored listing per whitelist
Dec 28 12:28:15.414: INFO: namespace e2e-tests-configmap-79dhv deletion completed in 6.352276857s

• [SLOW TEST:19.502 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
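The ConfigMap test above ("with mappings and Item mode set") mounts a ConfigMap volume where an `items` entry remaps a key to a different file path and sets an explicit per-file mode. A sketch of that volume source (the key, path, and `0o400` mode are illustrative, not read from the log):

```python
# ConfigMap volume with an item mapping and a per-item file mode.
configmap_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",   # illustrative ConfigMap name
        "items": [{
            "key": "data-1",                   # key inside the ConfigMap
            "path": "path/to/data-2",          # file path inside the mount
            "mode": 0o400,                     # per-item mode (decimal 256
                                               # in the JSON wire form)
        }],
    },
}
```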
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:28:15.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:28:15.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-cwb2q" to be "success or failure"
Dec 28 12:28:15.693: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.525953ms
Dec 28 12:28:17.717: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04488171s
Dec 28 12:28:19.775: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103076795s
Dec 28 12:28:22.326: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653667547s
Dec 28 12:28:24.432: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.759292344s
Dec 28 12:28:26.455: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.78305602s
STEP: Saw pod success
Dec 28 12:28:26.456: INFO: Pod "downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:28:26.468: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:28:26.737: INFO: Waiting for pod downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:28:26.762: INFO: Pod downwardapi-volume-858528bf-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:28:26.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cwb2q" for this suite.
Dec 28 12:28:32.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:28:33.054: INFO: namespace: e2e-tests-downward-api-cwb2q, resource: bindings, ignored listing per whitelist
Dec 28 12:28:33.089: INFO: namespace e2e-tests-downward-api-cwb2q deletion completed in 6.247557101s

• [SLOW TEST:17.675 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
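The Downward API test above exposes the container's own memory limit as a file via a `resourceFieldRef` item. A sketch of the volume source it relies on (the paths and container name are illustrative):

```python
# downwardAPI volume item projecting the container's memory limit
# into a file; with the default divisor of "1" the file holds the
# limit in bytes.
downward_volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "memory_limit",            # file name inside the mount
            "resourceFieldRef": {
                "containerName": "client-container",
                "resource": "limits.memory",
            },
        }],
    },
}
```

The test then reads the file from the container's logs and compares it against the limit declared in the pod spec.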
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:28:33.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-901a6693-296d-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:28:33.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-hrnfh" to be "success or failure"
Dec 28 12:28:33.553: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 117.414132ms
Dec 28 12:28:35.577: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140824184s
Dec 28 12:28:37.595: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159302804s
Dec 28 12:28:40.122: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686544643s
Dec 28 12:28:42.143: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.707369184s
Dec 28 12:28:44.159: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.723577862s
STEP: Saw pod success
Dec 28 12:28:44.160: INFO: Pod "pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:28:44.165: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 12:28:45.318: INFO: Waiting for pod pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:28:45.339: INFO: Pod pod-projected-secrets-901c6fe2-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:28:45.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hrnfh" for this suite.
Dec 28 12:28:51.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:28:51.657: INFO: namespace: e2e-tests-projected-hrnfh, resource: bindings, ignored listing per whitelist
Dec 28 12:28:51.710: INFO: namespace e2e-tests-projected-hrnfh deletion completed in 6.358233984s

• [SLOW TEST:18.620 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
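The projected secret test above differs from a plain secret volume in that the secret arrives through a `projected` volume's `sources` list, which can combine secrets, configMaps, and downwardAPI items under one mount. A sketch of the single-source case with a key mapping (key and path names are illustrative):

```python
# Projected volume with one secret source; items remap the secret key
# to a chosen file path, as in the "with mappings" test above.
projected_volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [{
            "secret": {
                "name": "projected-secret-test-map",   # illustrative name
                "items": [{"key": "data-1", "path": "new-path-data-1"}],
            },
        }],
    },
}
```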
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:28:51.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:28:51.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-25252" to be "success or failure"
Dec 28 12:28:51.912: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.078071ms
Dec 28 12:28:53.955: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051989632s
Dec 28 12:28:55.977: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074473922s
Dec 28 12:28:57.996: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093005896s
Dec 28 12:29:00.522: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619819502s
Dec 28 12:29:02.550: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.647429896s
STEP: Saw pod success
Dec 28 12:29:02.550: INFO: Pod "downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:29:02.567: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:29:02.781: INFO: Waiting for pod downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:29:02.790: INFO: Pod downwardapi-volume-9b1ee344-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:29:02.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-25252" for this suite.
Dec 28 12:29:08.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:29:09.007: INFO: namespace: e2e-tests-projected-25252, resource: bindings, ignored listing per whitelist
Dec 28 12:29:09.122: INFO: namespace e2e-tests-projected-25252 deletion completed in 6.315365808s

• [SLOW TEST:17.412 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:29:09.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 28 12:29:09.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-grhj6,SelfLink:/api/v1/namespaces/e2e-tests-watch-grhj6/configmaps/e2e-watch-test-resource-version,UID:a5864e76-296d-11ea-a994-fa163e34d433,ResourceVersion:16349664,Generation:0,CreationTimestamp:2019-12-28 12:29:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 12:29:09.435: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-grhj6,SelfLink:/api/v1/namespaces/e2e-tests-watch-grhj6/configmaps/e2e-watch-test-resource-version,UID:a5864e76-296d-11ea-a994-fa163e34d433,ResourceVersion:16349665,Generation:0,CreationTimestamp:2019-12-28 12:29:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:29:09.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-grhj6" for this suite.
Dec 28 12:29:15.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:29:15.659: INFO: namespace: e2e-tests-watch-grhj6, resource: bindings, ignored listing per whitelist
Dec 28 12:29:15.675: INFO: namespace e2e-tests-watch-grhj6 deletion completed in 6.213596096s

• [SLOW TEST:6.552 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
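The Watchers test above mutates a ConfigMap twice, deletes it, then opens a watch starting from the resourceVersion returned by the *first* update, and expects to see only the later MODIFIED and DELETED notifications, exactly what the two `Got :` lines in the log show. A toy model of that replay semantics (events are simplified to `(type, resourceVersion)` tuples; the real API streams full objects):

```python
def events_since(events, resource_version):
    """Replay only the events newer than the requested starting point,
    the way a watch opened at a given resourceVersion skips everything
    at or before that version."""
    return [(etype, rv) for (etype, rv) in events if rv > resource_version]
```

With illustrative version numbers, starting the watch at the first update's version yields just the second modification and the deletion.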
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:29:15.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1228 12:29:57.825864       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 12:29:57.825: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:29:57.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s7d2x" for this suite.
Dec 28 12:30:08.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:30:08.841: INFO: namespace: e2e-tests-gc-s7d2x, resource: bindings, ignored listing per whitelist
Dec 28 12:30:08.865: INFO: namespace e2e-tests-gc-s7d2x deletion completed in 11.034270098s

• [SLOW TEST:53.190 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
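The garbage collector test above deletes a replication controller with delete options that say to orphan its pods, then waits 30 seconds to confirm the GC does not remove them. A toy model of the `propagationPolicy` distinction (the data shapes are simplified stand-ins, not client-go types):

```python
def delete_rc(rc_name, pods, propagation_policy="Background"):
    """Toy model of DeleteOptions.propagationPolicy for an RC delete.

    `pods` maps pod name -> owning RC name. With "Orphan" the dependents
    survive and merely lose their ownerReference (modeled as None);
    with "Background" the garbage collector deletes them.
    """
    survivors = {}
    for name, owner in pods.items():
        if owner != rc_name:
            survivors[name] = owner            # unrelated pod, untouched
        elif propagation_policy == "Orphan":
            survivors[name] = None             # ownerReference stripped
        # else: collected by the GC in the background
    return survivors
```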
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:30:08.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:30:10.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-65l49" for this suite.
Dec 28 12:30:18.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:30:18.699: INFO: namespace: e2e-tests-kubelet-test-65l49, resource: bindings, ignored listing per whitelist
Dec 28 12:30:18.888: INFO: namespace e2e-tests-kubelet-test-65l49 deletion completed in 8.503813645s

• [SLOW TEST:10.024 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:30:18.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 28 12:30:20.206: INFO: namespace e2e-tests-kubectl-8xn9t
Dec 28 12:30:20.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8xn9t'
Dec 28 12:30:21.129: INFO: stderr: ""
Dec 28 12:30:21.129: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 28 12:30:22.463: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:22.463: INFO: Found 0 / 1
Dec 28 12:30:23.719: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:23.719: INFO: Found 0 / 1
Dec 28 12:30:24.188: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:24.188: INFO: Found 0 / 1
Dec 28 12:30:25.732: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:25.732: INFO: Found 0 / 1
Dec 28 12:30:26.147: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:26.147: INFO: Found 0 / 1
Dec 28 12:30:27.168: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:27.168: INFO: Found 0 / 1
Dec 28 12:30:28.153: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:28.153: INFO: Found 0 / 1
Dec 28 12:30:30.113: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:30.113: INFO: Found 0 / 1
Dec 28 12:30:30.439: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:30.439: INFO: Found 0 / 1
Dec 28 12:30:31.143: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:31.143: INFO: Found 0 / 1
Dec 28 12:30:32.171: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:32.172: INFO: Found 0 / 1
Dec 28 12:30:33.144: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:33.144: INFO: Found 0 / 1
Dec 28 12:30:34.244: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:34.244: INFO: Found 1 / 1
Dec 28 12:30:34.244: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 28 12:30:34.258: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:30:34.258: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 28 12:30:34.264: INFO: wait on redis-master startup in e2e-tests-kubectl-8xn9t 
Dec 28 12:30:34.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gkpvz redis-master --namespace=e2e-tests-kubectl-8xn9t'
Dec 28 12:30:34.502: INFO: stderr: ""
Dec 28 12:30:34.502: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Dec 12:30:32.591 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Dec 12:30:32.599 # Server started, Redis version 3.2.12\n1:M 28 Dec 12:30:32.600 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Dec 12:30:32.600 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 28 12:30:34.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8xn9t'
Dec 28 12:30:34.750: INFO: stderr: ""
Dec 28 12:30:34.750: INFO: stdout: "service/rm2 exposed\n"
Dec 28 12:30:34.900: INFO: Service rm2 in namespace e2e-tests-kubectl-8xn9t found.
STEP: exposing service
Dec 28 12:30:36.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8xn9t'
Dec 28 12:30:37.213: INFO: stderr: ""
Dec 28 12:30:37.214: INFO: stdout: "service/rm3 exposed\n"
Dec 28 12:30:37.221: INFO: Service rm3 in namespace e2e-tests-kubectl-8xn9t found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:30:39.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8xn9t" for this suite.
Dec 28 12:31:05.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:31:05.363: INFO: namespace: e2e-tests-kubectl-8xn9t, resource: bindings, ignored listing per whitelist
Dec 28 12:31:05.585: INFO: namespace e2e-tests-kubectl-8xn9t deletion completed in 26.33587874s

• [SLOW TEST:46.696 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:31:05.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:31:05.832: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 29.166768ms)
Dec 28 12:31:05.842: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.077186ms)
Dec 28 12:31:05.854: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.10452ms)
Dec 28 12:31:05.866: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.590619ms)
Dec 28 12:31:05.877: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.569455ms)
Dec 28 12:31:05.885: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.473901ms)
Dec 28 12:31:05.892: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.442703ms)
Dec 28 12:31:05.905: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.749619ms)
Dec 28 12:31:05.914: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.759111ms)
Dec 28 12:31:05.923: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.071442ms)
Dec 28 12:31:05.931: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.980243ms)
Dec 28 12:31:05.939: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.233061ms)
Dec 28 12:31:05.957: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.431504ms)
Dec 28 12:31:05.965: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.846049ms)
Dec 28 12:31:05.975: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.182726ms)
Dec 28 12:31:05.982: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.343351ms)
Dec 28 12:31:05.989: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.466825ms)
Dec 28 12:31:05.995: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.201727ms)
Dec 28 12:31:06.005: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.483269ms)
Dec 28 12:31:06.016: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.820237ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:31:06.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kqbk9" for this suite.
Dec 28 12:31:12.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:31:12.179: INFO: namespace: e2e-tests-proxy-kqbk9, resource: bindings, ignored listing per whitelist
Dec 28 12:31:12.227: INFO: namespace e2e-tests-proxy-kqbk9 deletion completed in 6.20223644s

• [SLOW TEST:6.642 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:31:12.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 28 12:31:12.368: INFO: Waiting up to 5m0s for pod "pod-eed8a484-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-xk4xk" to be "success or failure"
Dec 28 12:31:12.376: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.773371ms
Dec 28 12:31:14.399: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030352332s
Dec 28 12:31:16.417: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048824055s
Dec 28 12:31:18.436: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067176388s
Dec 28 12:31:20.768: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.399268245s
Dec 28 12:31:22.777: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40804717s
Dec 28 12:31:24.849: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.480694058s
STEP: Saw pod success
Dec 28 12:31:24.849: INFO: Pod "pod-eed8a484-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:31:24.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eed8a484-296d-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:31:25.058: INFO: Waiting for pod pod-eed8a484-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:31:25.073: INFO: Pod pod-eed8a484-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:31:25.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xk4xk" for this suite.
Dec 28 12:31:31.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:31:31.152: INFO: namespace: e2e-tests-emptydir-xk4xk, resource: bindings, ignored listing per whitelist
Dec 28 12:31:31.268: INFO: namespace e2e-tests-emptydir-xk4xk deletion completed in 6.188361583s

• [SLOW TEST:19.041 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:31:31.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:31:31.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-x929k" to be "success or failure"
Dec 28 12:31:31.575: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.07709ms
Dec 28 12:31:33.597: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106744684s
Dec 28 12:31:35.618: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127858069s
Dec 28 12:31:37.637: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147527229s
Dec 28 12:31:39.657: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167298501s
Dec 28 12:31:41.684: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.194610237s
STEP: Saw pod success
Dec 28 12:31:41.684: INFO: Pod "downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:31:41.689: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:31:41.748: INFO: Waiting for pod downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005 to disappear
Dec 28 12:31:41.817: INFO: Pod downwardapi-volume-fa3bfb96-296d-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:31:41.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x929k" for this suite.
Dec 28 12:31:47.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:31:48.043: INFO: namespace: e2e-tests-downward-api-x929k, resource: bindings, ignored listing per whitelist
Dec 28 12:31:48.070: INFO: namespace e2e-tests-downward-api-x929k deletion completed in 6.234252214s

• [SLOW TEST:16.801 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:31:48.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:32:01.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-bblk8" for this suite.
Dec 28 12:32:25.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:32:25.743: INFO: namespace: e2e-tests-replication-controller-bblk8, resource: bindings, ignored listing per whitelist
Dec 28 12:32:25.941: INFO: namespace e2e-tests-replication-controller-bblk8 deletion completed in 24.269043063s

• [SLOW TEST:37.871 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:32:25.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 28 12:32:38.852: INFO: Successfully updated pod "annotationupdate1acf6077-296e-11ea-8e71-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:32:40.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2th68" for this suite.
Dec 28 12:33:05.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:33:05.192: INFO: namespace: e2e-tests-projected-2th68, resource: bindings, ignored listing per whitelist
Dec 28 12:33:05.244: INFO: namespace e2e-tests-projected-2th68 deletion completed in 24.26780825s

• [SLOW TEST:39.303 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:33:05.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-32598a02-296e-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 12:33:05.633: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-w6vw4" to be "success or failure"
Dec 28 12:33:05.650: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.306161ms
Dec 28 12:33:07.804: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170703822s
Dec 28 12:33:09.848: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21544309s
Dec 28 12:33:12.473: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.840077477s
Dec 28 12:33:14.510: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.87764559s
Dec 28 12:33:16.557: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.924143449s
STEP: Saw pod success
Dec 28 12:33:16.557: INFO: Pod "pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:33:16.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 12:33:16.914: INFO: Waiting for pod pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005 to disappear
Dec 28 12:33:16.925: INFO: Pod pod-projected-configmaps-325b0083-296e-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:33:16.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w6vw4" for this suite.
Dec 28 12:33:22.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:33:23.207: INFO: namespace: e2e-tests-projected-w6vw4, resource: bindings, ignored listing per whitelist
Dec 28 12:33:23.243: INFO: namespace e2e-tests-projected-w6vw4 deletion completed in 6.309690844s

• [SLOW TEST:17.999 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:33:23.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:33:23.547: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 28 12:33:29.110: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 12:33:33.143: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 28 12:33:35.156: INFO: Creating deployment "test-rollover-deployment"
Dec 28 12:33:35.184: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 28 12:33:37.199: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 28 12:33:37.207: INFO: Ensure that both replica sets have 1 created replica
Dec 28 12:33:37.213: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 28 12:33:37.225: INFO: Updating deployment test-rollover-deployment
Dec 28 12:33:37.225: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 28 12:33:39.536: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 28 12:33:39.546: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 28 12:33:39.553: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:39.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133218, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:41.577: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:41.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133218, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:43.577: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:43.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133218, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:46.287: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:46.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133218, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:47.572: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:47.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133218, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:49.572: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:49.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133228, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:51.581: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:51.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133228, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:53.584: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:53.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133228, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:55.579: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:55.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133228, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:33:57.581: INFO: all replica sets need to contain the pod-template-hash label
Dec 28 12:33:57.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133228, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713133215, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 28 12:34:00.061: INFO: 
Dec 28 12:34:00.061: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 28 12:34:00.424: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-k974j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k974j/deployments/test-rollover-deployment,UID:43f78e5e-296e-11ea-a994-fa163e34d433,ResourceVersion:16350439,Generation:2,CreationTimestamp:2019-12-28 12:33:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-28 12:33:35 +0000 UTC 2019-12-28 12:33:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-28 12:33:59 +0000 UTC 2019-12-28 12:33:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 28 12:34:00.445: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-k974j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k974j/replicasets/test-rollover-deployment-5b8479fdb6,UID:45335624-296e-11ea-a994-fa163e34d433,ResourceVersion:16350430,Generation:2,CreationTimestamp:2019-12-28 12:33:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 43f78e5e-296e-11ea-a994-fa163e34d433 0xc001f99237 0xc001f99238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 28 12:34:00.445: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 28 12:34:00.445: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-k974j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k974j/replicasets/test-rollover-controller,UID:3d083b45-296e-11ea-a994-fa163e34d433,ResourceVersion:16350438,Generation:2,CreationTimestamp:2019-12-28 12:33:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 43f78e5e-296e-11ea-a994-fa163e34d433 0xc001f98857 0xc001f98858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 12:34:00.445: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-k974j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k974j/replicasets/test-rollover-deployment-58494b7559,UID:44019204-296e-11ea-a994-fa163e34d433,ResourceVersion:16350391,Generation:2,CreationTimestamp:2019-12-28 12:33:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 43f78e5e-296e-11ea-a994-fa163e34d433 0xc001f98997 0xc001f98998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 12:34:00.453: INFO: Pod "test-rollover-deployment-5b8479fdb6-xf8pf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-xf8pf,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-k974j,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k974j/pods/test-rollover-deployment-5b8479fdb6-xf8pf,UID:45b56aa4-296e-11ea-a994-fa163e34d433,ResourceVersion:16350415,Generation:0,CreationTimestamp:2019-12-28 12:33:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 45335624-296e-11ea-a994-fa163e34d433 0xc0020ec817 0xc0020ec818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hbdd2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hbdd2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hbdd2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020ec880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020ec8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:33:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:33:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:33:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:33:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-28 12:33:38 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-28 12:33:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ec21b5f42488233dc7081eeeaf5a48dfa4ded829fcc7f10542f149311af04fb9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:34:00.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-k974j" for this suite.
Dec 28 12:34:12.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:34:12.877: INFO: namespace: e2e-tests-deployment-k974j, resource: bindings, ignored listing per whitelist
Dec 28 12:34:12.877: INFO: namespace e2e-tests-deployment-k974j deletion completed in 12.406341266s

• [SLOW TEST:49.634 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
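The rollover test above exercises a RollingUpdate with maxSurge=1, maxUnavailable=0, minReadySeconds=10: the new ReplicaSet surges up first, and the old ones only scale down once the new pods are available, ending with every old ReplicaSet at zero replicas. A minimal Python sketch of that surge/unavailability arithmetic (an illustrative simulation, not the e2e framework's Go code; all names here are made up):

```python
def rollover_steps(desired=1, max_surge=1, max_unavailable=0):
    """Simulate rolling-update pod counts as (old_ready, new_ready) steps.

    Mirrors the invariants the conformance test asserts: ready pods never
    drop below desired - max_unavailable, totals never exceed
    desired + max_surge, and the old ReplicaSet finishes at 0.
    """
    old, new = desired, 0          # ready pod counts per ReplicaSet
    steps = [(old, new)]
    while not (old == 0 and new == desired):
        total = old + new
        if new < desired and total < desired + max_surge:
            new += 1               # surge: bring up a new pod first
        elif old > 0 and total - 1 >= desired - max_unavailable:
            old -= 1               # then retire an old pod
        steps.append((old, new))
        assert old + new >= desired - max_unavailable  # availability floor
        assert old + new <= desired + max_surge        # surge ceiling
    return steps
```

For the 1-replica Deployment in the log this yields (1,0) → (1,1) → (0,1), matching the "Ensure that both old replica sets have no replicas" check.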
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:34:12.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-5a9f4cb4-296e-11ea-8e71-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-5a9f4d47-296e-11ea-8e71-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5a9f4cb4-296e-11ea-8e71-0242ac110005
STEP: Updating configmap cm-test-opt-upd-5a9f4d47-296e-11ea-8e71-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-5a9f4d68-296e-11ea-8e71-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:35:41.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hn7jt" for this suite.
Dec 28 12:36:05.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:36:05.616: INFO: namespace: e2e-tests-projected-hn7jt, resource: bindings, ignored listing per whitelist
Dec 28 12:36:05.779: INFO: namespace e2e-tests-projected-hn7jt deletion completed in 24.23492772s

• [SLOW TEST:112.901 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
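The projected-ConfigMap test above checks optional-source semantics: deleting an optional ConfigMap removes its files from the volume, updating one changes file content, and creating a previously missing one makes its files appear. A rough Python sketch of that resolution logic (hypothetical names following the test's cm-test-opt-del/upd/create pattern; this is not kubelet code):

```python
def project(configmaps, sources):
    """Resolve a projected-volume view from optional ConfigMap sources.

    `configmaps` maps name -> {key: value}; `sources` lists
    (configmap_name, mount_dir) pairs. Optional sources that are
    absent produce no files instead of an error.
    """
    files = {}
    for name, mount_dir in sources:
        data = configmaps.get(name)   # optional: absence is tolerated
        if data is not None:
            for key, value in data.items():
                files[f"{mount_dir}/{key}"] = value
    return files
```

Re-running `project` after the delete/update/create trio reproduces what the test waits to observe in the volume.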
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:36:05.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1228 12:36:21.610973       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 12:36:21.611: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:36:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tjpzb" for this suite.
Dec 28 12:36:46.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:36:46.997: INFO: namespace: e2e-tests-gc-tjpzb, resource: bindings, ignored listing per whitelist
Dec 28 12:36:47.008: INFO: namespace e2e-tests-gc-tjpzb deletion completed in 25.390634145s

• [SLOW TEST:41.229 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
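The garbage-collector test above gives half the pods two owners (simpletest-rc-to-be-deleted and simpletest-rc-to-stay) and verifies they survive when only the first owner is deleted. The underlying rule is that a dependent is collectable only when every owner reference is gone; a one-line Python sketch of that check (illustrative only, not the controller's Go implementation):

```python
def collectable(obj, live_owner_uids):
    """True only when none of the object's owners still exist.

    A dependent with at least one live owner must be kept, which is
    exactly what the test asserts for the dual-owner pods.
    """
    return not any(uid in live_owner_uids for uid in obj["owners"])
```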
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:36:47.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 28 12:36:47.162: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:37:04.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-f9khg" for this suite.
Dec 28 12:37:10.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:37:10.594: INFO: namespace: e2e-tests-init-container-f9khg, resource: bindings, ignored listing per whitelist
Dec 28 12:37:10.773: INFO: namespace e2e-tests-init-container-f9khg deletion completed in 6.316587165s

• [SLOW TEST:23.764 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
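The init-container test above relies on the sequencing rule that init containers run to completion, in order, before any app container starts; with restartPolicy=Never, the first init failure fails the whole pod and the app containers never run. A minimal Python sketch of that state machine (an illustrative model, not kubelet code):

```python
def run_pod(init_results, restart_policy="Never"):
    """Return (phase, app_containers_started) after the init sequence.

    `init_results` is the ordered list of init-container outcomes
    (True = exited 0). With restartPolicy=Never a failure is terminal.
    """
    for ok in init_results:
        if not ok:
            if restart_policy == "Never":
                return ("Failed", False)  # pod fails; app never starts
            return ("Pending", False)     # otherwise the init retries
    return ("Running", True)
```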
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:37:10.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:37:11.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-dtgxs" to be "success or failure"
Dec 28 12:37:11.124: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.697977ms
Dec 28 12:37:13.292: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279076823s
Dec 28 12:37:15.310: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297270897s
Dec 28 12:37:17.703: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690275765s
Dec 28 12:37:19.721: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708709498s
Dec 28 12:37:22.048: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.034954143s
STEP: Saw pod success
Dec 28 12:37:22.048: INFO: Pod "downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:37:22.062: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:37:22.492: INFO: Waiting for pod downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005 to disappear
Dec 28 12:37:22.505: INFO: Pod downwardapi-volume-c4996aa6-296e-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:37:22.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dtgxs" for this suite.
Dec 28 12:37:28.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:37:28.698: INFO: namespace: e2e-tests-downward-api-dtgxs, resource: bindings, ignored listing per whitelist
Dec 28 12:37:28.884: INFO: namespace e2e-tests-downward-api-dtgxs deletion completed in 6.310214855s

• [SLOW TEST:18.111 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
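The Downward API test above verifies the fallback the volume plugin applies: when a container declares no memory limit, `limits.memory` resolves to the node's allocatable memory instead. The rule reduces to a simple default (sketched in Python with illustrative byte values, not the actual plugin code):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """Downward API limits.memory: the container's own limit when set,
    otherwise the node's allocatable memory."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes
```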
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:37:28.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 28 12:37:29.089: INFO: Waiting up to 5m0s for pod "pod-cf616614-296e-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-zdrw2" to be "success or failure"
Dec 28 12:37:29.110: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.16932ms
Dec 28 12:37:31.234: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144470265s
Dec 28 12:37:33.267: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177384018s
Dec 28 12:37:35.302: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212805632s
Dec 28 12:37:37.313: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223572463s
Dec 28 12:37:39.331: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.241241487s
STEP: Saw pod success
Dec 28 12:37:39.331: INFO: Pod "pod-cf616614-296e-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:37:39.347: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cf616614-296e-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:37:39.496: INFO: Waiting for pod pod-cf616614-296e-11ea-8e71-0242ac110005 to disappear
Dec 28 12:37:39.579: INFO: Pod pod-cf616614-296e-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:37:39.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zdrw2" for this suite.
Dec 28 12:37:47.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:37:47.802: INFO: namespace: e2e-tests-emptydir-zdrw2, resource: bindings, ignored listing per whitelist
Dec 28 12:37:48.039: INFO: namespace e2e-tests-emptydir-zdrw2 deletion completed in 8.450727605s

• [SLOW TEST:19.155 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:37:48.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 28 12:38:08.947: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:09.117: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:11.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:11.126: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:13.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:13.141: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:15.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:15.137: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:17.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:17.126: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:19.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:19.198: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:21.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:21.134: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:23.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:23.135: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:25.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:25.135: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:27.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:27.136: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:29.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:29.144: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 28 12:38:31.117: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 28 12:38:31.157: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:38:31.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-kdxkk" for this suite.
Dec 28 12:38:55.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:38:55.372: INFO: namespace: e2e-tests-container-lifecycle-hook-kdxkk, resource: bindings, ignored listing per whitelist
Dec 28 12:38:55.398: INFO: namespace e2e-tests-container-lifecycle-hook-kdxkk deletion completed in 24.200759695s

• [SLOW TEST:67.359 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:38:55.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 28 12:38:55.605: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 28 12:39:00.636: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:39:00.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-hg6ns" for this suite.
Dec 28 12:39:10.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:39:11.865: INFO: namespace: e2e-tests-replication-controller-hg6ns, resource: bindings, ignored listing per whitelist
Dec 28 12:39:12.080: INFO: namespace e2e-tests-replication-controller-hg6ns deletion completed in 11.258581715s

• [SLOW TEST:16.682 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:39:12.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:39:12.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-6ss54" to be "success or failure"
Dec 28 12:39:12.432: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.442654ms
Dec 28 12:39:14.699: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32433134s
Dec 28 12:39:16.711: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336687599s
Dec 28 12:39:19.889: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.514212084s
Dec 28 12:39:21.898: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.52309735s
Dec 28 12:39:23.913: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.538667043s
STEP: Saw pod success
Dec 28 12:39:23.913: INFO: Pod "downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:39:23.919: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:39:24.176: INFO: Waiting for pod downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005 to disappear
Dec 28 12:39:24.234: INFO: Pod downwardapi-volume-0cf22336-296f-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:39:24.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6ss54" for this suite.
Dec 28 12:39:30.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:39:30.674: INFO: namespace: e2e-tests-projected-6ss54, resource: bindings, ignored listing per whitelist
Dec 28 12:39:30.763: INFO: namespace e2e-tests-projected-6ss54 deletion completed in 6.515748667s

• [SLOW TEST:18.683 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:39:30.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:39:30.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-6765n" to be "success or failure"
Dec 28 12:39:31.016: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.879876ms
Dec 28 12:39:33.195: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198493662s
Dec 28 12:39:35.211: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213697025s
Dec 28 12:39:37.299: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301951759s
Dec 28 12:39:39.333: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336244894s
Dec 28 12:39:41.368: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.370549645s
STEP: Saw pod success
Dec 28 12:39:41.368: INFO: Pod "downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:39:41.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:39:42.289: INFO: Waiting for pod downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005 to disappear
Dec 28 12:39:42.595: INFO: Pod downwardapi-volume-18093a1f-296f-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:39:42.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6765n" for this suite.
Dec 28 12:39:48.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:39:48.874: INFO: namespace: e2e-tests-projected-6765n, resource: bindings, ignored listing per whitelist
Dec 28 12:39:48.894: INFO: namespace e2e-tests-projected-6765n deletion completed in 6.244600719s

• [SLOW TEST:18.131 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:39:48.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 28 12:39:49.083: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351240,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 12:39:49.084: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351240,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 28 12:39:59.160: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351252,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 28 12:39:59.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351252,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 28 12:40:09.288: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351265,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 12:40:09.289: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351265,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 28 12:40:19.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351278,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 28 12:40:19.322: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-a,UID:22d6820b-296f-11ea-a994-fa163e34d433,ResourceVersion:16351278,Generation:0,CreationTimestamp:2019-12-28 12:39:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 28 12:40:29.354: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-b,UID:3ad4ab8e-296f-11ea-a994-fa163e34d433,ResourceVersion:16351291,Generation:0,CreationTimestamp:2019-12-28 12:40:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 12:40:29.354: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-b,UID:3ad4ab8e-296f-11ea-a994-fa163e34d433,ResourceVersion:16351291,Generation:0,CreationTimestamp:2019-12-28 12:40:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 28 12:40:39.385: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-b,UID:3ad4ab8e-296f-11ea-a994-fa163e34d433,ResourceVersion:16351304,Generation:0,CreationTimestamp:2019-12-28 12:40:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 28 12:40:39.386: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-px69c,SelfLink:/api/v1/namespaces/e2e-tests-watch-px69c/configmaps/e2e-watch-test-configmap-b,UID:3ad4ab8e-296f-11ea-a994-fa163e34d433,ResourceVersion:16351304,Generation:0,CreationTimestamp:2019-12-28 12:40:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:40:49.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-px69c" for this suite.
Dec 28 12:40:57.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:40:57.563: INFO: namespace: e2e-tests-watch-px69c, resource: bindings, ignored listing per whitelist
Dec 28 12:40:57.673: INFO: namespace e2e-tests-watch-px69c deletion completed in 8.269743438s

• [SLOW TEST:68.778 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:40:57.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:40:57.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-zn4r6" to be "success or failure"
Dec 28 12:40:58.028: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.69025ms
Dec 28 12:41:00.332: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36140525s
Dec 28 12:41:02.379: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408353931s
Dec 28 12:41:04.602: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.631459041s
Dec 28 12:41:06.678: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.706992575s
Dec 28 12:41:08.772: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.801477414s
Dec 28 12:41:11.226: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.255827022s
STEP: Saw pod success
Dec 28 12:41:11.226: INFO: Pod "downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:41:11.248: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:41:11.634: INFO: Waiting for pod downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005 to disappear
Dec 28 12:41:11.649: INFO: Pod downwardapi-volume-4be3fafd-296f-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:41:11.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zn4r6" for this suite.
Dec 28 12:41:17.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:41:17.889: INFO: namespace: e2e-tests-projected-zn4r6, resource: bindings, ignored listing per whitelist
Dec 28 12:41:17.908: INFO: namespace e2e-tests-projected-zn4r6 deletion completed in 6.245714712s

• [SLOW TEST:20.235 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:41:17.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-pkr4
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 12:41:18.146: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pkr4" in namespace "e2e-tests-subpath-w9jv5" to be "success or failure"
Dec 28 12:41:18.259: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 112.910147ms
Dec 28 12:41:20.284: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137597011s
Dec 28 12:41:22.309: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163220609s
Dec 28 12:41:24.364: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218152814s
Dec 28 12:41:26.383: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.236556021s
Dec 28 12:41:28.397: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.251032276s
Dec 28 12:41:30.410: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.26331111s
Dec 28 12:41:32.419: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.273234903s
Dec 28 12:41:34.579: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 16.432504812s
Dec 28 12:41:36.600: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 18.453824539s
Dec 28 12:41:38.622: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 20.475548449s
Dec 28 12:41:40.643: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 22.496614098s
Dec 28 12:41:42.669: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 24.522940468s
Dec 28 12:41:44.696: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 26.549895106s
Dec 28 12:41:46.715: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 28.568352733s
Dec 28 12:41:48.728: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 30.58161409s
Dec 28 12:41:50.756: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 32.610232448s
Dec 28 12:41:52.787: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Running", Reason="", readiness=false. Elapsed: 34.640726994s
Dec 28 12:41:54.797: INFO: Pod "pod-subpath-test-secret-pkr4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.650934835s
STEP: Saw pod success
Dec 28 12:41:54.797: INFO: Pod "pod-subpath-test-secret-pkr4" satisfied condition "success or failure"
Dec 28 12:41:54.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-pkr4 container test-container-subpath-secret-pkr4: 
STEP: delete the pod
Dec 28 12:41:55.585: INFO: Waiting for pod pod-subpath-test-secret-pkr4 to disappear
Dec 28 12:41:55.906: INFO: Pod pod-subpath-test-secret-pkr4 no longer exists
STEP: Deleting pod pod-subpath-test-secret-pkr4
Dec 28 12:41:55.907: INFO: Deleting pod "pod-subpath-test-secret-pkr4" in namespace "e2e-tests-subpath-w9jv5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:41:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-w9jv5" for this suite.
Dec 28 12:42:02.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:42:02.326: INFO: namespace: e2e-tests-subpath-w9jv5, resource: bindings, ignored listing per whitelist
Dec 28 12:42:02.413: INFO: namespace e2e-tests-subpath-w9jv5 deletion completed in 6.453300097s

• [SLOW TEST:44.505 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
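The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll loop: the framework re-reads the pod's phase every couple of seconds until it is terminal or the timeout expires. A minimal sketch of that pattern (my own function, assuming `get_phase` stands in for a client call that reads `pod.status.phase`):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() until it reports a terminal phase or timeout_s passes."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)  # the log above shows roughly 2s between polls
    raise TimeoutError("pod did not reach a terminal phase")
```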
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:42:02.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:42:02.840: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 28 12:42:08.781: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 28 12:42:13.590: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 28 12:42:13.740: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-xxtkg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xxtkg/deployments/test-cleanup-deployment,UID:78fdea31-296f-11ea-a994-fa163e34d433,ResourceVersion:16351500,Generation:1,CreationTimestamp:2019-12-28 12:42:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 28 12:42:13.757: INFO: New ReplicaSet "test-cleanup-deployment-6df768c57" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57,GenerateName:,Namespace:e2e-tests-deployment-xxtkg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xxtkg/replicasets/test-cleanup-deployment-6df768c57,UID:790f9f4e-296f-11ea-a994-fa163e34d433,ResourceVersion:16351502,Generation:1,CreationTimestamp:2019-12-28 12:42:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 78fdea31-296f-11ea-a994-fa163e34d433 0xc0020ec930 0xc0020ec931}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 28 12:42:13.757: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 28 12:42:13.758: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-xxtkg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xxtkg/replicasets/test-cleanup-controller,UID:727d75e0-296f-11ea-a994-fa163e34d433,ResourceVersion:16351501,Generation:1,CreationTimestamp:2019-12-28 12:42:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 78fdea31-296f-11ea-a994-fa163e34d433 0xc0020ec867 0xc0020ec868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 28 12:42:13.806: INFO: Pod "test-cleanup-controller-ksdjk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-ksdjk,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-xxtkg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xxtkg/pods/test-cleanup-controller-ksdjk,UID:72931bb1-296f-11ea-a994-fa163e34d433,ResourceVersion:16351496,Generation:0,CreationTimestamp:2019-12-28 12:42:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 727d75e0-296f-11ea-a994-fa163e34d433 0xc001cdcce7 0xc001cdcce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rjvxs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rjvxs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rjvxs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cdcd80} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001cdcdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:42:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:42:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:42:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-28 12:42:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-28 12:42:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-28 12:42:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://81184306a0e06795913d9ec4446cf5b6093ffe627959303e71225e3308d11b17}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 28 12:42:13.807: INFO: Pod "test-cleanup-deployment-6df768c57-xkbcc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57-xkbcc,GenerateName:test-cleanup-deployment-6df768c57-,Namespace:e2e-tests-deployment-xxtkg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xxtkg/pods/test-cleanup-deployment-6df768c57-xkbcc,UID:7910f495-296f-11ea-a994-fa163e34d433,ResourceVersion:16351503,Generation:0,CreationTimestamp:2019-12-28 12:42:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-6df768c57 790f9f4e-296f-11ea-a994-fa163e34d433 0xc001cdcea0 0xc001cdcea1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rjvxs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rjvxs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rjvxs true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cdcf60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cdcf80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:42:13.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xxtkg" for this suite.
Dec 28 12:42:25.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:42:26.014: INFO: namespace: e2e-tests-deployment-xxtkg, resource: bindings, ignored listing per whitelist
Dec 28 12:42:26.091: INFO: namespace e2e-tests-deployment-xxtkg deletion completed in 12.167965933s

• [SLOW TEST:23.677 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
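The Deployment dump above shows `RevisionHistoryLimit:*0`, which is why the test expects old ReplicaSets to be deleted as soon as they are scaled down: the controller keeps at most that many fully scaled-down ReplicaSets as rollback history. A toy model of the cleanup decision (my own simplification, assuming `old_rs` is sorted oldest revision first):

```python
def replica_sets_to_delete(old_rs, history_limit):
    """Return the old ReplicaSets a Deployment controller would clean up.

    Only ReplicaSets that are fully scaled down (replicas == 0) are eligible;
    the newest history_limit of them are kept as rollback history.
    """
    eligible = [rs for rs in old_rs if rs["replicas"] == 0]
    if history_limit == 0:
        return eligible  # RevisionHistoryLimit 0: no history is retained
    if history_limit >= len(eligible):
        return []
    return eligible[:-history_limit]
```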
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:42:26.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-s4l5
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 12:42:27.998: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-s4l5" in namespace "e2e-tests-subpath-mft9c" to be "success or failure"
Dec 28 12:42:28.033: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.360371ms
Dec 28 12:42:30.106: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108173279s
Dec 28 12:42:32.130: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132504946s
Dec 28 12:42:34.292: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294194512s
Dec 28 12:42:36.313: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315350168s
Dec 28 12:42:38.348: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.350734816s
Dec 28 12:42:40.377: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.379167647s
Dec 28 12:42:42.388: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.39053679s
Dec 28 12:42:44.399: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.40172554s
Dec 28 12:42:46.425: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 18.427511108s
Dec 28 12:42:48.443: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 20.444797183s
Dec 28 12:42:50.457: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 22.45881625s
Dec 28 12:42:52.482: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 24.484675993s
Dec 28 12:42:54.584: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 26.586785272s
Dec 28 12:42:56.618: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 28.62069661s
Dec 28 12:42:58.634: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 30.636041138s
Dec 28 12:43:00.654: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 32.655813602s
Dec 28 12:43:02.705: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Running", Reason="", readiness=false. Elapsed: 34.707320747s
Dec 28 12:43:04.713: INFO: Pod "pod-subpath-test-downwardapi-s4l5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.715374507s
STEP: Saw pod success
Dec 28 12:43:04.713: INFO: Pod "pod-subpath-test-downwardapi-s4l5" satisfied condition "success or failure"
Dec 28 12:43:04.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-s4l5 container test-container-subpath-downwardapi-s4l5: 
STEP: delete the pod
Dec 28 12:43:05.455: INFO: Waiting for pod pod-subpath-test-downwardapi-s4l5 to disappear
Dec 28 12:43:05.874: INFO: Pod pod-subpath-test-downwardapi-s4l5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-s4l5
Dec 28 12:43:05.874: INFO: Deleting pod "pod-subpath-test-downwardapi-s4l5" in namespace "e2e-tests-subpath-mft9c"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:43:05.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mft9c" for this suite.
Dec 28 12:43:13.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:43:14.081: INFO: namespace: e2e-tests-subpath-mft9c, resource: bindings, ignored listing per whitelist
Dec 28 12:43:14.178: INFO: namespace e2e-tests-subpath-mft9c deletion completed in 8.261702406s

• [SLOW TEST:48.086 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
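Both subpath runs above exercise "atomic writer" volumes: projected volume types (secret, configMap, downward API) whose contents are updated so that a reader never sees a half-written file. Kubernetes does this at the directory level with a timestamped directory and a symlink swap; the same idea for a single file, as a sketch (my own helper, not the kubelet's AtomicWriter), is write-to-temp-then-rename:

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Replace path's contents so readers see old or new bytes, never partial.

    Writes to a temp file in the same directory (so the rename stays on one
    filesystem), fsyncs, then atomically renames over the target.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX within one filesystem
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```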
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:43:14.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:44:14.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m8dvt" for this suite.
Dec 28 12:44:38.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:44:38.703: INFO: namespace: e2e-tests-container-probe-m8dvt, resource: bindings, ignored listing per whitelist
Dec 28 12:44:38.750: INFO: namespace e2e-tests-container-probe-m8dvt deletion completed in 24.344271509s

• [SLOW TEST:84.571 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
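The probe test above encodes a key distinction: a failing readiness probe only keeps the pod out of the Ready condition, while only a failing liveness probe restarts the container. Hence "never be ready and never restart" for a pod whose readiness probe always fails. A toy model of that rule (my simplification, not kubelet code):

```python
def run_probe_loop(readiness_results, liveness_results):
    """Return (ready, restarts) after applying sequences of probe outcomes.

    Readiness outcomes only gate the Ready condition; each liveness failure
    is modeled as one container restart.
    """
    ready = False
    restarts = 0
    for ok in readiness_results:
        ready = ok  # latest readiness outcome wins
    for ok in liveness_results:
        if not ok:
            restarts += 1
    return ready, restarts
```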
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:44:38.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-8k8pn in namespace e2e-tests-proxy-dhmlh
I1228 12:44:39.164415       8 runners.go:184] Created replication controller with name: proxy-service-8k8pn, namespace: e2e-tests-proxy-dhmlh, replica count: 1
I1228 12:44:40.214952       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:41.215257       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:42.215482       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:43.216060       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:44.216409       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:45.216721       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:46.217291       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:47.217656       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:48.217996       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:49.218272       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1228 12:44:50.218510       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 12:44:51.218829       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 12:44:52.219318       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 12:44:53.219668       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1228 12:44:54.219947       8 runners.go:184] proxy-service-8k8pn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 28 12:44:54.230: INFO: setup took 15.244434872s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 28 12:44:54.258: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-dhmlh/services/proxy-service-8k8pn:portname1/proxy/: foo (200; 27.817039ms)
Dec 28 12:44:54.258: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-dhmlh/pods/http:proxy-service-8k8pn-njf2n:162/proxy/: bar (200; 28.314123ms)
Dec 28 12:44:54.275: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-dhmlh/pods/proxy-service-8k8pn-njf2n:1080/proxy/: ... [log truncated: response body, remaining proxy attempts, and test completion lost to HTML stripping]
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1228 12:45:38.702511       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 28 12:45:38.702: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:45:38.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nxl89" for this suite.
Dec 28 12:45:46.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:45:46.848: INFO: namespace: e2e-tests-gc-nxl89, resource: bindings, ignored listing per whitelist
Dec 28 12:45:46.899: INFO: namespace e2e-tests-gc-nxl89 deletion completed in 8.192441226s

• [SLOW TEST:39.300 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
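The garbage-collector case above deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan, then waits 30 seconds to confirm the ReplicaSet survives. A minimal sketch of the DeleteOptions body involved (the deployment name is hypothetical); kubectl v1.13 exposes the same behavior as `kubectl delete deployment <name> --cascade=false`:

```yaml
# DeleteOptions sent with the DELETE request on the Deployment.
# "Orphan" tells the garbage collector to strip ownerReferences from
# dependents (the ReplicaSet) instead of cascading the delete.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```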
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:45:46.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 28 12:45:47.386: INFO: Waiting up to 5m0s for pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005" in namespace "e2e-tests-containers-f5svd" to be "success or failure"
Dec 28 12:45:47.396: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.78354ms
Dec 28 12:45:49.613: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226821803s
Dec 28 12:45:51.627: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240376411s
Dec 28 12:45:53.679: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293156632s
Dec 28 12:45:55.951: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.564834525s
Dec 28 12:45:57.965: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.578707799s
Dec 28 12:45:59.985: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.599060501s
STEP: Saw pod success
Dec 28 12:45:59.985: INFO: Pod "client-containers-f864e089-296f-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:46:00.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f864e089-296f-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:46:00.314: INFO: Waiting for pod client-containers-f864e089-296f-11ea-8e71-0242ac110005 to disappear
Dec 28 12:46:00.455: INFO: Pod client-containers-f864e089-296f-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:46:00.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-f5svd" for this suite.
Dec 28 12:46:07.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:46:07.226: INFO: namespace: e2e-tests-containers-f5svd, resource: bindings, ignored listing per whitelist
Dec 28 12:46:07.246: INFO: namespace e2e-tests-containers-f5svd deletion completed in 6.542283134s

• [SLOW TEST:20.346 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
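The Docker Containers case above verifies that a container which sets neither `command` nor `args` runs the image's built-in ENTRYPOINT/CMD. A hypothetical manifest with the same shape (names and image are illustrative, not the test's actual fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # no command/args: the image's ENTRYPOINT/CMD are used
```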
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:46:07.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 28 12:46:33.732: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:33.732: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:34.321: INFO: Exec stderr: ""
Dec 28 12:46:34.322: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:34.322: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:34.795: INFO: Exec stderr: ""
Dec 28 12:46:34.795: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:34.795: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:35.167: INFO: Exec stderr: ""
Dec 28 12:46:35.167: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:35.167: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:35.498: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 28 12:46:35.498: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:35.498: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:35.804: INFO: Exec stderr: ""
Dec 28 12:46:35.804: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:35.804: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:36.168: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 28 12:46:36.169: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:36.169: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:36.600: INFO: Exec stderr: ""
Dec 28 12:46:36.600: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:36.600: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:36.951: INFO: Exec stderr: ""
Dec 28 12:46:36.952: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:36.952: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:37.234: INFO: Exec stderr: ""
Dec 28 12:46:37.234: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xpjgm PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:46:37.234: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:46:37.566: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:46:37.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-xpjgm" for this suite.
Dec 28 12:47:33.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:47:33.785: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-xpjgm, resource: bindings, ignored listing per whitelist
Dec 28 12:47:33.898: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-xpjgm deletion completed in 56.299909214s

• [SLOW TEST:86.652 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
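The KubeletManagedEtcHosts case above checks three situations: with `hostNetwork: false` (the default) the kubelet writes the pod's `/etc/hosts`; with `hostNetwork: true` it does not; and a container that mounts its own file over `/etc/hosts` also opts out. An illustrative sketch of the hostNetwork variant (names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo        # hypothetical name
spec:
  hostNetwork: true           # kubelet does NOT manage /etc/hosts for this pod;
                              # with hostNetwork: false it would, unless the
                              # container mounts its own file at /etc/hosts
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
```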
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:47:33.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 28 12:47:34.285: INFO: Waiting up to 5m0s for pod "pod-381962b6-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-sv2fq" to be "success or failure"
Dec 28 12:47:34.444: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 158.848202ms
Dec 28 12:47:36.537: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252249381s
Dec 28 12:47:38.564: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27909346s
Dec 28 12:47:40.997: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.712267149s
Dec 28 12:47:43.011: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726153813s
Dec 28 12:47:45.041: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755907786s
STEP: Saw pod success
Dec 28 12:47:45.041: INFO: Pod "pod-381962b6-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:47:45.052: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-381962b6-2970-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:47:45.216: INFO: Waiting for pod pod-381962b6-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:47:45.233: INFO: Pod pod-381962b6-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:47:45.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sv2fq" for this suite.
Dec 28 12:47:51.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:47:51.469: INFO: namespace: e2e-tests-emptydir-sv2fq, resource: bindings, ignored listing per whitelist
Dec 28 12:47:51.642: INFO: namespace e2e-tests-emptydir-sv2fq deletion completed in 6.394355394s

• [SLOW TEST:17.743 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
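The EmptyDir case above exercises the (non-root, 0644, tmpfs) combination: a memory-backed emptyDir volume written with mode 0644 by a non-root user. A minimal sketch of that setup (names and the exact command are assumptions, not the test's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000           # non-root, as in the (non-root,0644,tmpfs) variant
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory          # tmpfs-backed emptyDir
```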
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:47:51.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:47:51.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-5xhkk" to be "success or failure"
Dec 28 12:47:51.993: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 146.594597ms
Dec 28 12:47:54.011: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165210796s
Dec 28 12:47:56.024: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178405379s
Dec 28 12:47:58.039: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193285022s
Dec 28 12:48:00.064: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218226696s
Dec 28 12:48:02.074: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228379239s
STEP: Saw pod success
Dec 28 12:48:02.074: INFO: Pod "downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:48:02.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:48:02.435: INFO: Waiting for pod downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:48:02.463: INFO: Pod downwardapi-volume-4291842a-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:48:02.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5xhkk" for this suite.
Dec 28 12:48:08.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:48:08.672: INFO: namespace: e2e-tests-projected-5xhkk, resource: bindings, ignored listing per whitelist
Dec 28 12:48:08.680: INFO: namespace e2e-tests-projected-5xhkk deletion completed in 6.203342929s

• [SLOW TEST:17.038 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
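The Projected downwardAPI case above mounts a projected volume exposing `limits.cpu` via a resourceFieldRef while the container sets no CPU limit, so the file reports the node's allocatable CPU instead. A hedged sketch (names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so node allocatable CPU is reported
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```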
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:48:08.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:48:08.800: INFO: Creating ReplicaSet my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005
Dec 28 12:48:08.897: INFO: Pod name my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005: Found 0 pods out of 1
Dec 28 12:48:14.069: INFO: Pod name my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005: Found 1 pods out of 1
Dec 28 12:48:14.069: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005" is running
Dec 28 12:48:20.114: INFO: Pod "my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005-k4cq4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 12:48:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 12:48:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 12:48:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-28 12:48:08 +0000 UTC Reason: Message:}])
Dec 28 12:48:20.114: INFO: Trying to dial the pod
Dec 28 12:48:25.157: INFO: Controller my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005: Got expected result from replica 1 [my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005-k4cq4]: "my-hostname-basic-4cb19fe9-2970-11ea-8e71-0242ac110005-k4cq4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:48:25.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-4swkv" for this suite.
Dec 28 12:48:31.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:48:31.443: INFO: namespace: e2e-tests-replicaset-4swkv, resource: bindings, ignored listing per whitelist
Dec 28 12:48:31.465: INFO: namespace e2e-tests-replicaset-4swkv deletion completed in 6.301689648s

• [SLOW TEST:22.785 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
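The ReplicaSet case above creates a one-replica set from a public image that serves the pod's hostname, then dials each replica and expects its own name back. An illustrative manifest (the name, label, and image tag are assumptions):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic     # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # a public image that responds with the pod's hostname (tag assumed)
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```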
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:48:31.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 28 12:48:31.650: INFO: Waiting up to 5m0s for pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-ncf8m" to be "success or failure"
Dec 28 12:48:31.656: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299235ms
Dec 28 12:48:33.816: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165392467s
Dec 28 12:48:35.827: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177358662s
Dec 28 12:48:37.883: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232876985s
Dec 28 12:48:39.912: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261601935s
Dec 28 12:48:41.942: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.292052314s
Dec 28 12:48:43.953: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.303093399s
STEP: Saw pod success
Dec 28 12:48:43.953: INFO: Pod "downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:48:43.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 12:48:44.569: INFO: Waiting for pod downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:48:44.874: INFO: Pod downward-api-5a4db8fc-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:48:44.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ncf8m" for this suite.
Dec 28 12:48:51.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:48:51.183: INFO: namespace: e2e-tests-downward-api-ncf8m, resource: bindings, ignored listing per whitelist
Dec 28 12:48:51.313: INFO: namespace e2e-tests-downward-api-ncf8m deletion completed in 6.372523488s

• [SLOW TEST:19.847 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
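The Downward API case above injects the pod's UID into the container environment via a fieldRef and checks it in the container's output. A minimal sketch (pod and variable names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # pod UID injected by the downward API
```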
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:48:51.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-66287e88-2970-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 12:48:51.546: INFO: Waiting up to 5m0s for pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-6d4sm" to be "success or failure"
Dec 28 12:48:51.556: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.30575ms
Dec 28 12:48:53.824: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277963252s
Dec 28 12:48:55.850: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303877672s
Dec 28 12:48:58.005: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459422274s
Dec 28 12:49:00.297: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750582378s
Dec 28 12:49:02.447: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.901085756s
Dec 28 12:49:04.482: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.935638833s
STEP: Saw pod success
Dec 28 12:49:04.482: INFO: Pod "pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:49:04.494: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 28 12:49:04.838: INFO: Waiting for pod pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:49:04.861: INFO: Pod pod-configmaps-6629cea8-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:49:04.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6d4sm" for this suite.
Dec 28 12:49:13.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:49:13.230: INFO: namespace: e2e-tests-configmap-6d4sm, resource: bindings, ignored listing per whitelist
Dec 28 12:49:13.266: INFO: namespace e2e-tests-configmap-6d4sm deletion completed in 8.377743615s

• [SLOW TEST:21.953 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
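The ConfigMap case above mounts a ConfigMap as a volume and reads it from a container running as a non-root user. A hedged sketch of both objects (names, key, and value are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000           # consume the volume as a non-root user
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```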
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:49:13.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-7341d7fc-2970-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:49:13.648: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-c42vb" to be "success or failure"
Dec 28 12:49:13.659: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.509878ms
Dec 28 12:49:15.817: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168356239s
Dec 28 12:49:17.857: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208355168s
Dec 28 12:49:19.872: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223669073s
Dec 28 12:49:21.894: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245721441s
Dec 28 12:49:23.906: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.257504801s
STEP: Saw pod success
Dec 28 12:49:23.906: INFO: Pod "pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:49:23.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 12:49:24.575: INFO: Waiting for pod pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:49:24.607: INFO: Pod pod-projected-secrets-7342f9b6-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:49:24.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c42vb" for this suite.
Dec 28 12:49:31.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:49:31.369: INFO: namespace: e2e-tests-projected-c42vb, resource: bindings, ignored listing per whitelist
Dec 28 12:49:31.426: INFO: namespace e2e-tests-projected-c42vb deletion completed in 6.80509414s

• [SLOW TEST:18.159 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:49:31.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 28 12:49:31.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-hqr6z run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 28 12:49:45.365: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 28 12:49:45.366: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:49:47.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hqr6z" for this suite.
Dec 28 12:49:53.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:49:53.911: INFO: namespace: e2e-tests-kubectl-hqr6z, resource: bindings, ignored listing per whitelist
Dec 28 12:49:54.017: INFO: namespace e2e-tests-kubectl-hqr6z deletion completed in 6.311498667s

• [SLOW TEST:22.590 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:49:54.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:49:54.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-wprk5" for this suite.
Dec 28 12:50:00.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:50:00.652: INFO: namespace: e2e-tests-services-wprk5, resource: bindings, ignored listing per whitelist
Dec 28 12:50:00.656: INFO: namespace e2e-tests-services-wprk5 deletion completed in 6.293207737s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.639 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:50:00.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 28 12:50:00.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7c66q'
Dec 28 12:50:00.930: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 28 12:50:00.930: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 28 12:50:00.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-7c66q'
Dec 28 12:50:01.100: INFO: stderr: ""
Dec 28 12:50:01.100: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:50:01.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7c66q" for this suite.
Dec 28 12:50:25.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:50:25.302: INFO: namespace: e2e-tests-kubectl-7c66q, resource: bindings, ignored listing per whitelist
Dec 28 12:50:25.332: INFO: namespace e2e-tests-kubectl-7c66q deletion completed in 24.212971064s

• [SLOW TEST:24.676 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:50:25.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-9e37b2c5-2970-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 12:50:25.593: INFO: Waiting up to 5m0s for pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-kg62m" to be "success or failure"
Dec 28 12:50:25.685: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.632992ms
Dec 28 12:50:28.106: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512827147s
Dec 28 12:50:30.124: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530064328s
Dec 28 12:50:32.365: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.771569852s
Dec 28 12:50:34.423: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.829892389s
Dec 28 12:50:36.438: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.844419073s
STEP: Saw pod success
Dec 28 12:50:36.438: INFO: Pod "pod-secrets-9e388196-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:50:36.441: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9e388196-2970-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 12:50:37.456: INFO: Waiting for pod pod-secrets-9e388196-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:50:38.013: INFO: Pod pod-secrets-9e388196-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:50:38.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kg62m" for this suite.
Dec 28 12:50:44.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:50:44.514: INFO: namespace: e2e-tests-secrets-kg62m, resource: bindings, ignored listing per whitelist
Dec 28 12:50:44.576: INFO: namespace e2e-tests-secrets-kg62m deletion completed in 6.525991943s

• [SLOW TEST:19.243 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:50:44.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 12:50:44.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-kjz24" to be "success or failure"
Dec 28 12:50:44.802: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.889453ms
Dec 28 12:50:46.815: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027980473s
Dec 28 12:50:48.861: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073813832s
Dec 28 12:50:51.101: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314366859s
Dec 28 12:50:53.126: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338819998s
Dec 28 12:50:55.147: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.35976458s
STEP: Saw pod success
Dec 28 12:50:55.147: INFO: Pod "downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:50:55.161: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 12:50:55.222: INFO: Waiting for pod downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:50:55.299: INFO: Pod downwardapi-volume-a9a8e1de-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:50:55.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kjz24" for this suite.
Dec 28 12:51:01.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:51:01.532: INFO: namespace: e2e-tests-downward-api-kjz24, resource: bindings, ignored listing per whitelist
Dec 28 12:51:01.592: INFO: namespace e2e-tests-downward-api-kjz24 deletion completed in 6.280900317s

• [SLOW TEST:17.015 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:51:01.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-pgfk
STEP: Creating a pod to test atomic-volume-subpath
Dec 28 12:51:01.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pgfk" in namespace "e2e-tests-subpath-gkwdl" to be "success or failure"
Dec 28 12:51:01.912: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.276511ms
Dec 28 12:51:03.975: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081598775s
Dec 28 12:51:05.997: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103795678s
Dec 28 12:51:08.611: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.717391565s
Dec 28 12:51:10.739: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.845836951s
Dec 28 12:51:12.810: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.917023312s
Dec 28 12:51:14.836: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.94262076s
Dec 28 12:51:17.009: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.116134182s
Dec 28 12:51:19.019: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 17.125752225s
Dec 28 12:51:21.038: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.144903525s
Dec 28 12:51:23.051: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 21.157569479s
Dec 28 12:51:25.066: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 23.172981085s
Dec 28 12:51:27.081: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 25.187630762s
Dec 28 12:51:29.097: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 27.204224031s
Dec 28 12:51:31.115: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 29.221879395s
Dec 28 12:51:33.137: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 31.243859445s
Dec 28 12:51:35.159: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 33.266051263s
Dec 28 12:51:37.180: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 35.286397519s
Dec 28 12:51:39.209: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Running", Reason="", readiness=false. Elapsed: 37.315670554s
Dec 28 12:51:41.227: INFO: Pod "pod-subpath-test-configmap-pgfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.333556998s
STEP: Saw pod success
Dec 28 12:51:41.227: INFO: Pod "pod-subpath-test-configmap-pgfk" satisfied condition "success or failure"
Dec 28 12:51:41.236: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pgfk container test-container-subpath-configmap-pgfk: 
STEP: delete the pod
Dec 28 12:51:42.125: INFO: Waiting for pod pod-subpath-test-configmap-pgfk to disappear
Dec 28 12:51:42.631: INFO: Pod pod-subpath-test-configmap-pgfk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pgfk
Dec 28 12:51:42.631: INFO: Deleting pod "pod-subpath-test-configmap-pgfk" in namespace "e2e-tests-subpath-gkwdl"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:51:42.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gkwdl" for this suite.
Dec 28 12:51:48.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:51:49.063: INFO: namespace: e2e-tests-subpath-gkwdl, resource: bindings, ignored listing per whitelist
Dec 28 12:51:49.258: INFO: namespace e2e-tests-subpath-gkwdl deletion completed in 6.479970921s

• [SLOW TEST:47.666 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:51:49.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 28 12:51:49.521: INFO: Waiting up to 5m0s for pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-ps9rb" to be "success or failure"
Dec 28 12:51:49.621: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.598832ms
Dec 28 12:51:52.004: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482917865s
Dec 28 12:51:54.043: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521677568s
Dec 28 12:51:56.181: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659455477s
Dec 28 12:51:58.219: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697391249s
Dec 28 12:52:00.239: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.717322645s
Dec 28 12:52:02.713: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.191423882s
STEP: Saw pod success
Dec 28 12:52:02.713: INFO: Pod "pod-d03b9cf3-2970-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 12:52:02.737: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d03b9cf3-2970-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 12:52:03.164: INFO: Waiting for pod pod-d03b9cf3-2970-11ea-8e71-0242ac110005 to disappear
Dec 28 12:52:03.184: INFO: Pod pod-d03b9cf3-2970-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:52:03.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ps9rb" for this suite.
Dec 28 12:52:09.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:52:09.284: INFO: namespace: e2e-tests-emptydir-ps9rb, resource: bindings, ignored listing per whitelist
Dec 28 12:52:09.370: INFO: namespace e2e-tests-emptydir-ps9rb deletion completed in 6.179539503s

• [SLOW TEST:20.111 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:52:09.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 28 12:52:09.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5zcpb'
Dec 28 12:52:09.902: INFO: stderr: ""
Dec 28 12:52:09.902: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 28 12:52:11.986: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:11.986: INFO: Found 0 / 1
Dec 28 12:52:12.997: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:12.997: INFO: Found 0 / 1
Dec 28 12:52:13.928: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:13.928: INFO: Found 0 / 1
Dec 28 12:52:14.941: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:14.941: INFO: Found 0 / 1
Dec 28 12:52:16.360: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:16.360: INFO: Found 0 / 1
Dec 28 12:52:17.652: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:17.652: INFO: Found 0 / 1
Dec 28 12:52:18.385: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:18.385: INFO: Found 0 / 1
Dec 28 12:52:19.043: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:19.043: INFO: Found 0 / 1
Dec 28 12:52:19.924: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:19.924: INFO: Found 0 / 1
Dec 28 12:52:20.922: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:20.922: INFO: Found 0 / 1
Dec 28 12:52:21.937: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:21.937: INFO: Found 1 / 1
Dec 28 12:52:21.937: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 28 12:52:21.946: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:21.946: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 28 12:52:21.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7fxgj --namespace=e2e-tests-kubectl-5zcpb -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 28 12:52:22.170: INFO: stderr: ""
Dec 28 12:52:22.170: INFO: stdout: "pod/redis-master-7fxgj patched\n"
STEP: checking annotations
Dec 28 12:52:22.195: INFO: Selector matched 1 pods for map[app:redis]
Dec 28 12:52:22.195: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:52:22.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5zcpb" for this suite.
Dec 28 12:52:58.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:52:58.627: INFO: namespace: e2e-tests-kubectl-5zcpb, resource: bindings, ignored listing per whitelist
Dec 28 12:52:58.676: INFO: namespace e2e-tests-kubectl-5zcpb deletion completed in 36.473669407s

• [SLOW TEST:49.306 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:52:58.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 28 12:53:18.998: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:19.010: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:21.010: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:21.021: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:23.014: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:23.083: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:25.010: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:25.021: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:27.010: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:27.025: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:29.010: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:29.025: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:31.010: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:31.028: INFO: Pod pod-with-prestop-http-hook still exists
Dec 28 12:53:33.010: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 28 12:53:33.026: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:53:33.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-r55wj" for this suite.
Dec 28 12:53:57.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:53:57.397: INFO: namespace: e2e-tests-container-lifecycle-hook-r55wj, resource: bindings, ignored listing per whitelist
Dec 28 12:53:57.404: INFO: namespace e2e-tests-container-lifecycle-hook-r55wj deletion completed in 24.320652148s

• [SLOW TEST:58.726 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
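The preStop test above creates a hook-handler pod, then a pod whose container registers an HTTP preStop hook and deletes it. A minimal manifest sketch of such a pod follows; the image, hook path, port, and handler address are illustrative assumptions, not taken from the log (only the pod name is):

```yaml
# Sketch: pod with an HTTP preStop lifecycle hook. When the pod is deleted,
# the kubelet performs the httpGet below before stopping the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name matches the pod in the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # assumed path; the handler pod records the request
          port: 8080                 # assumed port of the hook-handler pod
          host: 10.32.0.4            # assumed handler-pod IP
```

The "check prestop hook" step then asserts the handler pod actually received the request.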
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:53:57.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 28 12:53:57.601: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix376737017/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:53:57.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pzt8h" for this suite.
Dec 28 12:54:03.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:54:03.933: INFO: namespace: e2e-tests-kubectl-pzt8h, resource: bindings, ignored listing per whitelist
Dec 28 12:54:04.053: INFO: namespace e2e-tests-kubectl-pzt8h deletion completed in 6.252629885s

• [SLOW TEST:6.649 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:54:04.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dt4kz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 12:54:04.245: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 12:54:46.614: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dt4kz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:54:46.614: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:54:48.171: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:54:48.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-dt4kz" for this suite.
Dec 28 12:55:12.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:55:12.340: INFO: namespace: e2e-tests-pod-network-test-dt4kz, resource: bindings, ignored listing per whitelist
Dec 28 12:55:12.394: INFO: namespace e2e-tests-pod-network-test-dt4kz deletion completed in 24.153412962s

• [SLOW TEST:68.340 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:55:12.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 28 12:55:12.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:13.080: INFO: stderr: ""
Dec 28 12:55:13.080: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 12:55:13.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:13.433: INFO: stderr: ""
Dec 28 12:55:13.433: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
Dec 28 12:55:13.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:13.701: INFO: stderr: ""
Dec 28 12:55:13.701: INFO: stdout: ""
Dec 28 12:55:13.701: INFO: update-demo-nautilus-cc927 is created but not running
Dec 28 12:55:18.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:21.700: INFO: stderr: ""
Dec 28 12:55:21.700: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
Dec 28 12:55:21.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:22.598: INFO: stderr: ""
Dec 28 12:55:22.598: INFO: stdout: ""
Dec 28 12:55:22.598: INFO: update-demo-nautilus-cc927 is created but not running
Dec 28 12:55:27.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:27.697: INFO: stderr: ""
Dec 28 12:55:27.697: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
Dec 28 12:55:27.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:27.778: INFO: stderr: ""
Dec 28 12:55:27.778: INFO: stdout: ""
Dec 28 12:55:27.778: INFO: update-demo-nautilus-cc927 is created but not running
Dec 28 12:55:32.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:33.842: INFO: stderr: ""
Dec 28 12:55:33.842: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
Dec 28 12:55:33.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:34.350: INFO: stderr: ""
Dec 28 12:55:34.350: INFO: stdout: ""
Dec 28 12:55:34.350: INFO: update-demo-nautilus-cc927 is created but not running
Dec 28 12:55:39.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:39.592: INFO: stderr: ""
Dec 28 12:55:39.592: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
Dec 28 12:55:39.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:39.709: INFO: stderr: ""
Dec 28 12:55:39.709: INFO: stdout: "true"
Dec 28 12:55:39.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:39.809: INFO: stderr: ""
Dec 28 12:55:39.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 12:55:39.809: INFO: validating pod update-demo-nautilus-cc927
Dec 28 12:55:39.931: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 12:55:39.931: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 12:55:39.931: INFO: update-demo-nautilus-cc927 is verified up and running
Dec 28 12:55:39.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s9k4x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:40.069: INFO: stderr: ""
Dec 28 12:55:40.069: INFO: stdout: "true"
Dec 28 12:55:40.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s9k4x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:40.379: INFO: stderr: ""
Dec 28 12:55:40.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 12:55:40.379: INFO: validating pod update-demo-nautilus-s9k4x
Dec 28 12:55:40.415: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 12:55:40.415: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 12:55:40.415: INFO: update-demo-nautilus-s9k4x is verified up and running
STEP: scaling down the replication controller
Dec 28 12:55:40.424: INFO: scanned /root for discovery docs: 
Dec 28 12:55:40.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:41.895: INFO: stderr: ""
Dec 28 12:55:41.895: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 12:55:41.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:42.025: INFO: stderr: ""
Dec 28 12:55:42.025: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 12:55:47.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:48.658: INFO: stderr: ""
Dec 28 12:55:48.658: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 12:55:53.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:53.819: INFO: stderr: ""
Dec 28 12:55:53.819: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 12:55:58.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:55:58.956: INFO: stderr: ""
Dec 28 12:55:58.956: INFO: stdout: "update-demo-nautilus-cc927 update-demo-nautilus-s9k4x "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 28 12:56:03.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:04.154: INFO: stderr: ""
Dec 28 12:56:04.154: INFO: stdout: "update-demo-nautilus-cc927 "
Dec 28 12:56:04.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:04.266: INFO: stderr: ""
Dec 28 12:56:04.266: INFO: stdout: "true"
Dec 28 12:56:04.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:04.378: INFO: stderr: ""
Dec 28 12:56:04.378: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 12:56:04.378: INFO: validating pod update-demo-nautilus-cc927
Dec 28 12:56:04.386: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 12:56:04.386: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 12:56:04.386: INFO: update-demo-nautilus-cc927 is verified up and running
STEP: scaling up the replication controller
Dec 28 12:56:04.388: INFO: scanned /root for discovery docs: 
Dec 28 12:56:04.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:06.694: INFO: stderr: ""
Dec 28 12:56:06.694: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 28 12:56:06.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:07.036: INFO: stderr: ""
Dec 28 12:56:07.036: INFO: stdout: "update-demo-nautilus-9sd5j update-demo-nautilus-cc927 "
Dec 28 12:56:07.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sd5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:07.153: INFO: stderr: ""
Dec 28 12:56:07.153: INFO: stdout: ""
Dec 28 12:56:07.153: INFO: update-demo-nautilus-9sd5j is created but not running
Dec 28 12:56:12.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:12.659: INFO: stderr: ""
Dec 28 12:56:12.659: INFO: stdout: "update-demo-nautilus-9sd5j update-demo-nautilus-cc927 "
Dec 28 12:56:12.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sd5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:12.841: INFO: stderr: ""
Dec 28 12:56:12.841: INFO: stdout: ""
Dec 28 12:56:12.841: INFO: update-demo-nautilus-9sd5j is created but not running
Dec 28 12:56:17.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:18.023: INFO: stderr: ""
Dec 28 12:56:18.023: INFO: stdout: "update-demo-nautilus-9sd5j update-demo-nautilus-cc927 "
Dec 28 12:56:18.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sd5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:18.121: INFO: stderr: ""
Dec 28 12:56:18.121: INFO: stdout: "true"
Dec 28 12:56:18.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sd5j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:18.292: INFO: stderr: ""
Dec 28 12:56:18.292: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 12:56:18.292: INFO: validating pod update-demo-nautilus-9sd5j
Dec 28 12:56:18.357: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 12:56:18.357: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 12:56:18.357: INFO: update-demo-nautilus-9sd5j is verified up and running
Dec 28 12:56:18.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:18.704: INFO: stderr: ""
Dec 28 12:56:18.704: INFO: stdout: "true"
Dec 28 12:56:18.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cc927 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:18.825: INFO: stderr: ""
Dec 28 12:56:18.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 28 12:56:18.825: INFO: validating pod update-demo-nautilus-cc927
Dec 28 12:56:18.845: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 28 12:56:18.845: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 28 12:56:18.845: INFO: update-demo-nautilus-cc927 is verified up and running
STEP: using delete to clean up resources
Dec 28 12:56:18.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:19.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 28 12:56:19.085: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 28 12:56:19.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wtv5h'
Dec 28 12:56:19.373: INFO: stderr: "No resources found.\n"
Dec 28 12:56:19.373: INFO: stdout: ""
Dec 28 12:56:19.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wtv5h -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 28 12:56:19.537: INFO: stderr: ""
Dec 28 12:56:19.538: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:56:19.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wtv5h" for this suite.
Dec 28 12:56:43.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:56:43.944: INFO: namespace: e2e-tests-kubectl-wtv5h, resource: bindings, ignored listing per whitelist
Dec 28 12:56:43.968: INFO: namespace e2e-tests-kubectl-wtv5h deletion completed in 24.370432282s

• [SLOW TEST:91.573 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
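The scale test above pipes a replication controller manifest to `kubectl create -f -`, then scales it down to 1 and back up to 2. A sketch of an equivalent manifest; the controller name, the `name=update-demo` label, and the nautilus image are taken from the log, while the port and other fields are assumptions:

```yaml
# Sketch: replication controller equivalent to the one the test creates from stdin.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo        # label the test's get/scale commands select on
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80  # assumed port
```

The scale steps in the log correspond to `kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m` and the same command with `--replicas=2`.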
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:56:43.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:57:18.288: INFO: Container started at 2019-12-28 12:56:54 +0000 UTC, pod became ready at 2019-12-28 12:57:17 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:57:18.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kbjss" for this suite.
Dec 28 12:57:42.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:57:42.471: INFO: namespace: e2e-tests-container-probe-kbjss, resource: bindings, ignored listing per whitelist
Dec 28 12:57:42.498: INFO: namespace e2e-tests-container-probe-kbjss deletion completed in 24.202721216s

• [SLOW TEST:58.529 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
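The readiness-probe test above verifies the pod does not report Ready before the probe's initial delay elapses (the log shows roughly a 23-second gap between container start and readiness) and that the container never restarts. A sketch of such a pod; the image, probe target, and timing values are assumptions:

```yaml
# Sketch: pod whose readiness probe is deliberately delayed. The pod should
# stay NotReady until initialDelaySeconds has passed, then become Ready and
# never restart.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver       # assumed name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # illustrative image
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20          # assumed value
      periodSeconds: 5
```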
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:57:42.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 28 12:57:42.730: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:58:13.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-8lpc8" for this suite.
Dec 28 12:58:39.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:58:39.791: INFO: namespace: e2e-tests-init-container-8lpc8, resource: bindings, ignored listing per whitelist
Dec 28 12:58:39.898: INFO: namespace e2e-tests-init-container-8lpc8 deletion completed in 26.355040671s

• [SLOW TEST:57.400 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
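The init-container test above ("PodSpec: initContainers in spec.initContainers") builds a RestartAlways pod whose init containers must each run to completion, in order, before the main container starts. A minimal sketch; names, images, and commands are illustrative:

```yaml
# Sketch: RestartAlways pod with two ordered init containers. The main
# container is only started after both init containers exit successfully.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo        # assumed name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox           # illustrative image
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox
    command: ['sh', '-c', 'true']
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1   # illustrative long-running container
```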
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:58:39.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 28 12:58:40.805: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c54e8229-2971-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0020ed482), BlockOwnerDeletion:(*bool)(0xc0020ed483)}}
Dec 28 12:58:40.939: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c534462f-2971-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001f99822), BlockOwnerDeletion:(*bool)(0xc001f99823)}}
Dec 28 12:58:41.141: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c5386588-2971-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0020ed6b2), BlockOwnerDeletion:(*bool)(0xc0020ed6b3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:58:46.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5wqsp" for this suite.
Dec 28 12:58:56.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 12:58:56.471: INFO: namespace: e2e-tests-gc-5wqsp, resource: bindings, ignored listing per whitelist
Dec 28 12:58:56.649: INFO: namespace e2e-tests-gc-5wqsp deletion completed in 10.443551969s

• [SLOW TEST:16.750 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
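The three `OwnerReferences` lines above form the dependency circle the garbage-collector test checks: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. A small sketch of that structure (the UIDs are the ones printed in the log; everything else, including the dict shapes, is illustrative rather than the suite's actual Go code):

```python
def owner_ref(name, uid):
    """Build a v1.OwnerReference-shaped dict pointing at another Pod."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": name,
        "uid": uid,
        "controller": True,
        "blockOwnerDeletion": True,
    }

# UIDs copied from the log lines above.
uids = {
    "pod1": "c534462f-2971-11ea-a994-fa163e34d433",
    "pod2": "c5386588-2971-11ea-a994-fa163e34d433",
    "pod3": "c54e8229-2971-11ea-a994-fa163e34d433",
}

# Each pod lists the next pod in the cycle as its owner, matching the log.
owners = {
    "pod1": owner_ref("pod3", uids["pod3"]),
    "pod2": owner_ref("pod1", uids["pod1"]),
    "pod3": owner_ref("pod2", uids["pod2"]),
}

# Walking the owner links from pod1 returns to pod1 after three hops --
# exactly the circle the garbage collector must not be blocked by.
chain = ["pod1"]
for _ in range(3):
    chain.append(owners[chain[-1]]["name"])
print(chain)  # ['pod1', 'pod3', 'pod2', 'pod1']
```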
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 12:58:56.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wk6fp
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 28 12:58:57.088: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 28 12:59:35.411: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-wk6fp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 28 12:59:35.411: INFO: >>> kubeConfig: /root/.kube/config
Dec 28 12:59:36.290: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 12:59:36.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-wk6fp" for this suite.
Dec 28 13:00:00.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:00:00.441: INFO: namespace: e2e-tests-pod-network-test-wk6fp, resource: bindings, ignored listing per whitelist
Dec 28 13:00:00.529: INFO: namespace e2e-tests-pod-network-test-wk6fp deletion completed in 24.217019074s

• [SLOW TEST:63.879 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:00:00.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f517cddd-2971-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 13:00:00.954: INFO: Waiting up to 5m0s for pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-mcw8h" to be "success or failure"
Dec 28 13:00:00.981: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.820289ms
Dec 28 13:00:03.072: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117573477s
Dec 28 13:00:05.100: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144984352s
Dec 28 13:00:07.116: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161269193s
Dec 28 13:00:09.411: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456691945s
Dec 28 13:00:11.429: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.474372149s
Dec 28 13:00:13.448: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.49353901s
Dec 28 13:00:15.521: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.566423833s
Dec 28 13:00:17.532: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.577213562s
STEP: Saw pod success
Dec 28 13:00:17.532: INFO: Pod "pod-secrets-f519289b-2971-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:00:17.534: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f519289b-2971-11ea-8e71-0242ac110005 container secret-env-test: 
STEP: delete the pod
Dec 28 13:00:17.610: INFO: Waiting for pod pod-secrets-f519289b-2971-11ea-8e71-0242ac110005 to disappear
Dec 28 13:00:17.623: INFO: Pod pod-secrets-f519289b-2971-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:00:17.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mcw8h" for this suite.
Dec 28 13:00:25.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:00:25.475: INFO: namespace: e2e-tests-secrets-mcw8h, resource: bindings, ignored listing per whitelist
Dec 28 13:00:25.503: INFO: namespace e2e-tests-secrets-mcw8h deletion completed in 7.870913372s

• [SLOW TEST:24.974 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
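The secrets test above creates a secret and a pod whose container reads a key from it through `secretKeyRef`. A hedged sketch of that pod shape (the secret and container names come from the log; the image, env var name, and key are assumptions for illustration):

```python
# Secret name as printed in the log.
secret_name = "secret-test-f517cddd-2971-11ea-8e71-0242ac110005"

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},  # illustrative name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "secret-env-test",  # container name taken from the log
            "image": "busybox",         # assumed image
            "env": [{
                "name": "SECRET_DATA",  # assumed env var name
                "valueFrom": {
                    "secretKeyRef": {"name": secret_name, "key": "data-1"},  # assumed key
                },
            }],
        }],
    },
}

ref = pod["spec"]["containers"][0]["env"][0]["valueFrom"]["secretKeyRef"]
print(ref["name"])  # secret-test-f517cddd-2971-11ea-8e71-0242ac110005
```

The pod runs to `Succeeded` once the env var is populated, which is the "success or failure" condition the log polls for.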
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:00:25.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:00:25.994: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-v4j2p" to be "success or failure"
Dec 28 13:00:26.236: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 242.153853ms
Dec 28 13:00:28.526: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532309551s
Dec 28 13:00:30.645: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650808631s
Dec 28 13:00:32.676: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.681586683s
Dec 28 13:00:34.685: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.69112842s
Dec 28 13:00:37.104: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.109472522s
Dec 28 13:00:39.117: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.12286937s
Dec 28 13:00:41.189: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.195164887s
STEP: Saw pod success
Dec 28 13:00:41.189: INFO: Pod "downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:00:41.196: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 13:00:41.594: INFO: Waiting for pod downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005 to disappear
Dec 28 13:00:41.636: INFO: Pod downwardapi-volume-0413795f-2972-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:00:41.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v4j2p" for this suite.
Dec 28 13:00:49.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:00:50.094: INFO: namespace: e2e-tests-projected-v4j2p, resource: bindings, ignored listing per whitelist
Dec 28 13:00:50.163: INFO: namespace e2e-tests-projected-v4j2p deletion completed in 8.385303625s

• [SLOW TEST:24.659 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
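The "podname only" test above mounts a projected downward-API volume exposing `metadata.name` as a single file. A sketch of that volume source (the `fieldPath` is the real downward-API one; the volume name and file path are illustrative):

```python
volume = {
    "name": "podinfo",  # illustrative volume name
    "projected": {
        "sources": [{
            "downwardAPI": {
                "items": [{
                    # One file, "podname", whose content is the pod's name.
                    "path": "podname",
                    "fieldRef": {"fieldPath": "metadata.name"},
                }],
            },
        }],
    },
}

item = volume["projected"]["sources"][0]["downwardAPI"]["items"][0]
print(item["fieldRef"]["fieldPath"])  # metadata.name
```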
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:00:50.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 28 13:00:50.946: INFO: Waiting up to 5m0s for pod "pod-12f438b2-2972-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-8zdjn" to be "success or failure"
Dec 28 13:00:50.977: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.638927ms
Dec 28 13:00:53.094: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148292021s
Dec 28 13:00:55.130: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183788417s
Dec 28 13:00:57.151: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204431108s
Dec 28 13:00:59.273: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326595404s
Dec 28 13:01:01.374: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.42825836s
Dec 28 13:01:03.453: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.507235705s
Dec 28 13:01:06.524: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.578051154s
Dec 28 13:01:08.710: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.764294894s
STEP: Saw pod success
Dec 28 13:01:08.711: INFO: Pod "pod-12f438b2-2972-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:01:08.761: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-12f438b2-2972-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 13:01:10.841: INFO: Waiting for pod pod-12f438b2-2972-11ea-8e71-0242ac110005 to disappear
Dec 28 13:01:11.133: INFO: Pod pod-12f438b2-2972-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:01:11.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8zdjn" for this suite.
Dec 28 13:01:19.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:01:19.483: INFO: namespace: e2e-tests-emptydir-8zdjn, resource: bindings, ignored listing per whitelist
Dec 28 13:01:19.559: INFO: namespace e2e-tests-emptydir-8zdjn deletion completed in 8.337271958s

• [SLOW TEST:29.396 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:01:19.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:01:19.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-5f8s8" to be "success or failure"
Dec 28 13:01:19.857: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.365134ms
Dec 28 13:01:22.616: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.771758393s
Dec 28 13:01:24.667: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.822587687s
Dec 28 13:01:26.677: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.833325343s
Dec 28 13:01:29.857: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.012451142s
Dec 28 13:01:31.877: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.033223021s
Dec 28 13:01:34.366: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.522272666s
Dec 28 13:01:36.380: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.536381379s
STEP: Saw pod success
Dec 28 13:01:36.381: INFO: Pod "downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:01:36.394: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 13:01:36.834: INFO: Waiting for pod downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005 to disappear
Dec 28 13:01:36.965: INFO: Pod downwardapi-volume-242f2b19-2972-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:01:36.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5f8s8" for this suite.
Dec 28 13:01:45.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:01:45.162: INFO: namespace: e2e-tests-projected-5f8s8, resource: bindings, ignored listing per whitelist
Dec 28 13:01:45.273: INFO: namespace e2e-tests-projected-5f8s8 deletion completed in 8.292770885s

• [SLOW TEST:25.713 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:01:45.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 28 13:01:58.384: INFO: Successfully updated pod "labelsupdate338e56ce-2972-11ea-8e71-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:02:00.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6nrqf" for this suite.
Dec 28 13:02:24.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:02:24.657: INFO: namespace: e2e-tests-downward-api-6nrqf, resource: bindings, ignored listing per whitelist
Dec 28 13:02:24.709: INFO: namespace e2e-tests-downward-api-6nrqf deletion completed in 24.192614681s

• [SLOW TEST:39.436 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:02:24.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 28 13:02:24.909: INFO: Waiting up to 5m0s for pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-f5xm9" to be "success or failure"
Dec 28 13:02:24.936: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.155383ms
Dec 28 13:02:26.956: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046917508s
Dec 28 13:02:28.982: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072968298s
Dec 28 13:02:31.003: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093451702s
Dec 28 13:02:33.970: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060486621s
Dec 28 13:02:35.997: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.088005219s
Dec 28 13:02:38.019: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.109100826s
Dec 28 13:02:40.208: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.298201933s
STEP: Saw pod success
Dec 28 13:02:40.208: INFO: Pod "pod-4ae9321b-2972-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:02:40.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ae9321b-2972-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 13:02:40.551: INFO: Waiting for pod pod-4ae9321b-2972-11ea-8e71-0242ac110005 to disappear
Dec 28 13:02:40.564: INFO: Pod pod-4ae9321b-2972-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:02:40.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f5xm9" for this suite.
Dec 28 13:02:48.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:02:48.805: INFO: namespace: e2e-tests-emptydir-f5xm9, resource: bindings, ignored listing per whitelist
Dec 28 13:02:48.838: INFO: namespace e2e-tests-emptydir-f5xm9 deletion completed in 8.255374909s

• [SLOW TEST:24.129 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
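The two EmptyDir tests in this stretch, "(non-root,0644,default)" earlier and "(root,0644,tmpfs)" above, differ only in the volume medium: an empty `emptyDir` source is node-local disk, while `medium: Memory` selects tmpfs. A minimal sketch of that distinction (names are illustrative):

```python
def empty_dir_volume(name, tmpfs=False):
    """Build an emptyDir volume dict; medium "Memory" selects tmpfs."""
    source = {"medium": "Memory"} if tmpfs else {}
    return {"name": name, "emptyDir": source}

default_vol = empty_dir_volume("test-volume")
tmpfs_vol = empty_dir_volume("test-volume", tmpfs=True)
print(default_vol["emptyDir"])  # {}
print(tmpfs_vol["emptyDir"])    # {'medium': 'Memory'}
```

In both variants the test container writes a file with mode 0644 into the mount and verifies the mode and content, hence the shared "success or failure" polling pattern in the log.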
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:02:48.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4cpbv
Dec 28 13:03:03.367: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4cpbv
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 13:03:03.370: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:07:04.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4cpbv" for this suite.
Dec 28 13:07:12.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:07:12.822: INFO: namespace: e2e-tests-container-probe-4cpbv, resource: bindings, ignored listing per whitelist
Dec 28 13:07:12.838: INFO: namespace e2e-tests-container-probe-4cpbv deletion completed in 8.246711888s

• [SLOW TEST:264.000 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
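The liveness test above names its probe directly: `exec "cat /tmp/health"`. As long as that file exists, `cat` exits 0 on every probe and the kubelet never restarts the container, which is why `restartCount` stays at 0 for the full four-minute observation window. A container-spec fragment for such a probe might look like this (the command is from the log; the timing fields are assumed values):

```python
liveness_probe = {
    "exec": {"command": ["cat", "/tmp/health"]},
    "initialDelaySeconds": 15,  # assumed
    "periodSeconds": 5,         # assumed
    "failureThreshold": 3,      # assumed
}

# Probe succeeds (exit 0) while /tmp/health exists -> no restarts.
print(" ".join(liveness_probe["exec"]["command"]))  # cat /tmp/health
```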
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:07:12.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 28 13:07:13.262: INFO: Waiting up to 5m0s for pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-bkqq4" to be "success or failure"
Dec 28 13:07:13.453: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 191.49381ms
Dec 28 13:07:15.546: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284317039s
Dec 28 13:07:17.558: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295776556s
Dec 28 13:07:19.585: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323443262s
Dec 28 13:07:23.155: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.893471459s
Dec 28 13:07:25.188: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.926092884s
Dec 28 13:07:27.219: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.956561039s
STEP: Saw pod success
Dec 28 13:07:27.219: INFO: Pod "downward-api-f6d47870-2972-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:07:27.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f6d47870-2972-11ea-8e71-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 28 13:07:27.401: INFO: Waiting for pod downward-api-f6d47870-2972-11ea-8e71-0242ac110005 to disappear
Dec 28 13:07:27.408: INFO: Pod downward-api-f6d47870-2972-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:07:27.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bkqq4" for this suite.
Dec 28 13:07:35.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:07:35.717: INFO: namespace: e2e-tests-downward-api-bkqq4, resource: bindings, ignored listing per whitelist
Dec 28 13:07:35.819: INFO: namespace e2e-tests-downward-api-bkqq4 deletion completed in 8.383254405s

• [SLOW TEST:22.981 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
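The Downward API test above relies on the fallback behavior for `resourceFieldRef`: when a container declares no resource limits, `limits.cpu` and `limits.memory` resolve to the node's allocatable values. A sketch of the env-var wiring (the `resource` field paths are the real downward-API ones; the env var names are assumptions):

```python
env = [
    {
        "name": "CPU_LIMIT",  # assumed name
        "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}},
    },
    {
        "name": "MEMORY_LIMIT",  # assumed name
        "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}},
    },
]

resources = [e["valueFrom"]["resourceFieldRef"]["resource"] for e in env]
print(resources)  # ['limits.cpu', 'limits.memory']
```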
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:07:35.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0487680f-2973-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 13:07:36.606: INFO: Waiting up to 5m0s for pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005" in namespace "e2e-tests-secrets-qmcwh" to be "success or failure"
Dec 28 13:07:36.626: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.192848ms
Dec 28 13:07:38.713: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106940994s
Dec 28 13:07:40.756: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150469834s
Dec 28 13:07:42.811: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205468282s
Dec 28 13:07:46.616: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01041676s
Dec 28 13:07:48.628: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.022282345s
Dec 28 13:07:50.681: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.075672875s
Dec 28 13:07:52.701: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.095244233s
STEP: Saw pod success
Dec 28 13:07:52.701: INFO: Pod "pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:07:52.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 28 13:07:53.691: INFO: Waiting for pod pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005 to disappear
Dec 28 13:07:53.708: INFO: Pod pod-secrets-04bc8863-2973-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:07:53.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qmcwh" for this suite.
Dec 28 13:08:01.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:08:01.810: INFO: namespace: e2e-tests-secrets-qmcwh, resource: bindings, ignored listing per whitelist
Dec 28 13:08:01.992: INFO: namespace e2e-tests-secrets-qmcwh deletion completed in 8.27399008s
STEP: Destroying namespace "e2e-tests-secret-namespace-7mvdr" for this suite.
Dec 28 13:08:08.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:08:08.104: INFO: namespace: e2e-tests-secret-namespace-7mvdr, resource: bindings, ignored listing per whitelist
Dec 28 13:08:08.291: INFO: namespace e2e-tests-secret-namespace-7mvdr deletion completed in 6.298586114s

• [SLOW TEST:32.470 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
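The repeated `Phase="Pending" ... Elapsed: ...` lines above come from a poll-until-condition loop: the framework re-checks the pod phase every couple of seconds until the pod reaches a terminal phase ("success or failure") or the 5m0s timeout expires. A minimal sketch of that pattern — the `get_phase` callback, interval, and log format are assumptions for illustration, not the e2e framework's actual Go code:

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout=300, interval=2):
    """Poll get_phase() until it returns a terminal pod phase or timeout.

    Mirrors the log above: each iteration reports the current phase and
    elapsed time; "Succeeded" or "Failed" ends the wait.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        phase = get_phase()
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod not terminal after {timeout}s")
        time.sleep(interval)
```

In the real suite the test then asserts the condition `"success or failure"` was satisfied with phase `Succeeded`, as the `STEP: Saw pod success` line records.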
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:08:08.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-17f50b30-2973-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 13:08:08.945: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-xwc5s" to be "success or failure"
Dec 28 13:08:09.100: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 154.752534ms
Dec 28 13:08:11.479: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53435399s
Dec 28 13:08:13.497: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552340554s
Dec 28 13:08:15.511: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.56616109s
Dec 28 13:08:19.088: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.143577055s
Dec 28 13:08:21.106: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.16139098s
Dec 28 13:08:23.124: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.179465199s
Dec 28 13:08:25.196: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.251320049s
Dec 28 13:08:27.239: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.293989494s
STEP: Saw pod success
Dec 28 13:08:27.239: INFO: Pod "pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:08:27.289: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 13:08:28.977: INFO: Waiting for pod pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005 to disappear
Dec 28 13:08:29.175: INFO: Pod pod-projected-configmaps-1801b7e0-2973-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:08:29.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xwc5s" for this suite.
Dec 28 13:08:35.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:08:35.375: INFO: namespace: e2e-tests-projected-xwc5s, resource: bindings, ignored listing per whitelist
Dec 28 13:08:35.414: INFO: namespace e2e-tests-projected-xwc5s deletion completed in 6.217565025s

• [SLOW TEST:27.122 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:08:35.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-27fbd1c5-2973-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 13:08:35.722: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-5zg74" to be "success or failure"
Dec 28 13:08:35.728: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218967ms
Dec 28 13:08:38.033: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31116218s
Dec 28 13:08:40.054: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332269828s
Dec 28 13:08:43.400: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.678237853s
Dec 28 13:08:45.865: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.143102285s
Dec 28 13:08:47.881: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.158836343s
Dec 28 13:08:49.955: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.233202651s
Dec 28 13:08:52.115: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.393730179s
STEP: Saw pod success
Dec 28 13:08:52.116: INFO: Pod "pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:08:52.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 28 13:08:52.473: INFO: Waiting for pod pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005 to disappear
Dec 28 13:08:52.627: INFO: Pod pod-projected-configmaps-27fcc35c-2973-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:08:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5zg74" for this suite.
Dec 28 13:08:58.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:08:58.803: INFO: namespace: e2e-tests-projected-5zg74, resource: bindings, ignored listing per whitelist
Dec 28 13:08:58.911: INFO: namespace e2e-tests-projected-5zg74 deletion completed in 6.26992879s

• [SLOW TEST:23.497 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:08:58.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 28 13:09:29.299: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 13:09:29.312: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 13:09:31.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 13:09:32.698: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 13:09:33.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 13:09:33.370: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 13:09:35.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 13:09:35.840: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 13:09:37.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 13:09:37.350: INFO: Pod pod-with-poststart-http-hook still exists
Dec 28 13:09:39.312: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 28 13:09:39.341: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:09:39.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mrg9s" for this suite.
Dec 28 13:10:03.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:10:03.548: INFO: namespace: e2e-tests-container-lifecycle-hook-mrg9s, resource: bindings, ignored listing per whitelist
Dec 28 13:10:03.770: INFO: namespace e2e-tests-container-lifecycle-hook-mrg9s deletion completed in 24.383530901s

• [SLOW TEST:64.858 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
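The alternating `Waiting for pod ... to disappear` / `still exists` pairs above reflect a second, simpler poll: after deletion, the test checks existence on a fixed interval until the API reports the pod gone. A sketch of that wait-for-disappear loop (the `pod_exists` callback is a hypothetical stand-in for the API GET):

```python
import time

def wait_for_pod_to_disappear(pod_exists, timeout=60, interval=2):
    """Poll pod_exists() until it returns False (pod deleted) or timeout.

    Matches the log's cadence: one existence check per interval, ending
    with "no longer exists" once the API returns NotFound.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if not pod_exists():
            return True  # pod no longer exists
        time.sleep(interval)
    return False  # still exists at timeout
```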
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:10:03.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 28 13:10:04.313: INFO: Number of nodes with available pods: 0
Dec 28 13:10:04.313: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:05.333: INFO: Number of nodes with available pods: 0
Dec 28 13:10:05.333: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:06.931: INFO: Number of nodes with available pods: 0
Dec 28 13:10:06.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:07.336: INFO: Number of nodes with available pods: 0
Dec 28 13:10:07.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:08.345: INFO: Number of nodes with available pods: 0
Dec 28 13:10:08.345: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:09.344: INFO: Number of nodes with available pods: 0
Dec 28 13:10:09.344: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:10.342: INFO: Number of nodes with available pods: 0
Dec 28 13:10:10.342: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:12.464: INFO: Number of nodes with available pods: 0
Dec 28 13:10:12.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:14.227: INFO: Number of nodes with available pods: 0
Dec 28 13:10:14.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:14.434: INFO: Number of nodes with available pods: 0
Dec 28 13:10:14.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:15.420: INFO: Number of nodes with available pods: 0
Dec 28 13:10:15.421: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:16.336: INFO: Number of nodes with available pods: 0
Dec 28 13:10:16.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:17.528: INFO: Number of nodes with available pods: 1
Dec 28 13:10:17.528: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 28 13:10:17.585: INFO: Number of nodes with available pods: 0
Dec 28 13:10:17.585: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:18.678: INFO: Number of nodes with available pods: 0
Dec 28 13:10:18.678: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:19.622: INFO: Number of nodes with available pods: 0
Dec 28 13:10:19.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:21.707: INFO: Number of nodes with available pods: 0
Dec 28 13:10:21.707: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:22.671: INFO: Number of nodes with available pods: 0
Dec 28 13:10:22.671: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:23.606: INFO: Number of nodes with available pods: 0
Dec 28 13:10:23.606: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:25.353: INFO: Number of nodes with available pods: 0
Dec 28 13:10:25.353: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:25.926: INFO: Number of nodes with available pods: 0
Dec 28 13:10:25.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:26.650: INFO: Number of nodes with available pods: 0
Dec 28 13:10:26.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:27.607: INFO: Number of nodes with available pods: 0
Dec 28 13:10:27.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:28.837: INFO: Number of nodes with available pods: 0
Dec 28 13:10:28.837: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:29.627: INFO: Number of nodes with available pods: 0
Dec 28 13:10:29.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:30.625: INFO: Number of nodes with available pods: 0
Dec 28 13:10:30.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:31.642: INFO: Number of nodes with available pods: 0
Dec 28 13:10:31.642: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:32.664: INFO: Number of nodes with available pods: 0
Dec 28 13:10:32.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:34.546: INFO: Number of nodes with available pods: 0
Dec 28 13:10:34.546: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:35.018: INFO: Number of nodes with available pods: 0
Dec 28 13:10:35.018: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:35.636: INFO: Number of nodes with available pods: 0
Dec 28 13:10:35.636: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:36.685: INFO: Number of nodes with available pods: 0
Dec 28 13:10:36.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:37.659: INFO: Number of nodes with available pods: 0
Dec 28 13:10:37.659: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:39.483: INFO: Number of nodes with available pods: 0
Dec 28 13:10:39.483: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:40.411: INFO: Number of nodes with available pods: 0
Dec 28 13:10:40.411: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:41.137: INFO: Number of nodes with available pods: 0
Dec 28 13:10:41.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:42.038: INFO: Number of nodes with available pods: 0
Dec 28 13:10:42.038: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:42.668: INFO: Number of nodes with available pods: 0
Dec 28 13:10:42.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:43.697: INFO: Number of nodes with available pods: 0
Dec 28 13:10:43.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 28 13:10:44.738: INFO: Number of nodes with available pods: 1
Dec 28 13:10:44.738: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lt9m4, will wait for the garbage collector to delete the pods
Dec 28 13:10:44.852: INFO: Deleting DaemonSet.extensions daemon-set took: 34.527883ms
Dec 28 13:10:44.952: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.519714ms
Dec 28 13:11:02.748: INFO: Number of nodes with available pods: 0
Dec 28 13:11:02.748: INFO: Number of running nodes: 0, number of available pods: 0
Dec 28 13:11:02.775: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lt9m4/daemonsets","resourceVersion":"16354821"},"items":null}

Dec 28 13:11:02.795: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lt9m4/pods","resourceVersion":"16354821"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:11:02.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lt9m4" for this suite.
Dec 28 13:11:11.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:11:11.177: INFO: namespace: e2e-tests-daemonsets-lt9m4, resource: bindings, ignored listing per whitelist
Dec 28 13:11:11.529: INFO: namespace e2e-tests-daemonsets-lt9m4 deletion completed in 8.628533932s

• [SLOW TEST:67.759 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
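The long `Number of nodes with available pods: 0` runs above track the DaemonSet launch check: a node counts as covered only once it runs exactly one available daemon pod, and the test passes when every schedulable node is covered (here a single node, so the count goes 0 → 1). A rough sketch of that per-node tally, under the assumption of a simple `(node, available)` pod listing — this is illustrative, not the controller's or the test's actual logic:

```python
def count_nodes_with_available_daemon_pods(node_names, pods):
    """Return (nodes_with_one_available_pod, all_nodes_covered).

    pods: list of (node_name, available) tuples, one per daemon pod.
    A node is counted only if it hosts exactly one daemon pod and that
    pod is available -- the condition the log lines above are polling.
    """
    per_node = {}
    for node, available in pods:
        per_node.setdefault(node, []).append(available)
    ready = [
        n for n in node_names
        if len(per_node.get(n, [])) == 1 and per_node[n][0]
    ]
    return len(ready), len(ready) == len(node_names)
```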
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:11:11.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-vbf5x/configmap-test-85083828-2973-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 28 13:11:12.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005" in namespace "e2e-tests-configmap-vbf5x" to be "success or failure"
Dec 28 13:11:12.074: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 68.196059ms
Dec 28 13:11:14.395: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388884457s
Dec 28 13:11:16.451: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445189821s
Dec 28 13:11:18.604: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.597736502s
Dec 28 13:11:20.656: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.650527889s
Dec 28 13:11:22.681: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.674816395s
Dec 28 13:11:24.695: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.68876015s
STEP: Saw pod success
Dec 28 13:11:24.695: INFO: Pod "pod-configmaps-85098364-2973-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:11:24.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-85098364-2973-11ea-8e71-0242ac110005 container env-test: 
STEP: delete the pod
Dec 28 13:11:24.766: INFO: Waiting for pod pod-configmaps-85098364-2973-11ea-8e71-0242ac110005 to disappear
Dec 28 13:11:24.784: INFO: Pod pod-configmaps-85098364-2973-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:11:24.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vbf5x" for this suite.
Dec 28 13:11:32.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:11:33.021: INFO: namespace: e2e-tests-configmap-vbf5x, resource: bindings, ignored listing per whitelist
Dec 28 13:11:33.164: INFO: namespace e2e-tests-configmap-vbf5x deletion completed in 8.370623222s

• [SLOW TEST:21.635 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:11:33.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-pmzd8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-pmzd8 to expose endpoints map[]
Dec 28 13:11:33.515: INFO: Get endpoints failed (7.021416ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 28 13:11:34.621: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-pmzd8 exposes endpoints map[] (1.11215998s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-pmzd8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-pmzd8 to expose endpoints map[pod1:[100]]
Dec 28 13:11:39.913: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.218945792s elapsed, will retry)
Dec 28 13:11:46.800: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-pmzd8 exposes endpoints map[pod1:[100]] (12.106039074s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-pmzd8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-pmzd8 to expose endpoints map[pod2:[101] pod1:[100]]
Dec 28 13:11:51.813: INFO: Unexpected endpoints: found map[92a9a1ae-2973-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.996167789s elapsed, will retry)
Dec 28 13:11:59.888: INFO: Unexpected endpoints: found map[92a9a1ae-2973-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (13.070954913s elapsed, will retry)
Dec 28 13:12:01.935: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-pmzd8 exposes endpoints map[pod2:[101] pod1:[100]] (15.118450203s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-pmzd8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-pmzd8 to expose endpoints map[pod2:[101]]
Dec 28 13:12:02.086: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-pmzd8 exposes endpoints map[pod2:[101]] (129.459882ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-pmzd8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-pmzd8 to expose endpoints map[]
Dec 28 13:12:03.402: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-pmzd8 exposes endpoints map[] (1.146437412s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:12:03.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-pmzd8" for this suite.
Dec 28 13:12:27.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:12:27.966: INFO: namespace: e2e-tests-services-pmzd8, resource: bindings, ignored listing per whitelist
Dec 28 13:12:28.113: INFO: namespace e2e-tests-services-pmzd8 deletion completed in 24.325062539s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:54.949 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:12:28.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6h6x7
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 28 13:12:29.029: INFO: Found 0 stateful pods, waiting for 3
Dec 28 13:12:39.968: INFO: Found 1 stateful pods, waiting for 3
Dec 28 13:12:49.048: INFO: Found 2 stateful pods, waiting for 3
Dec 28 13:12:59.127: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:12:59.127: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:12:59.127: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 28 13:13:09.059: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:13:09.059: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:13:09.060: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 28 13:13:09.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h6x7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 13:13:10.044: INFO: stderr: ""
Dec 28 13:13:10.044: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 13:13:10.044: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 28 13:13:20.248: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 28 13:13:30.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h6x7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:13:31.258: INFO: stderr: ""
Dec 28 13:13:31.258: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 13:13:31.258: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 13:13:31.398: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:13:31.399: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:13:31.399: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:13:31.399: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:13:41.419: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:13:41.419: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:13:41.419: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:13:51.952: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:13:51.952: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:13:51.952: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:14:01.655: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:14:01.656: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:14:11.442: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:14:11.442: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:14:21.435: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:14:21.435: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 28 13:14:32.051: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 28 13:14:41.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h6x7 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 28 13:14:42.144: INFO: stderr: ""
Dec 28 13:14:42.144: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 28 13:14:42.144: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 28 13:14:52.267: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 28 13:15:02.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6h6x7 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 28 13:15:03.203: INFO: stderr: ""
Dec 28 13:15:03.203: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 28 13:15:03.203: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 28 13:15:14.165: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:15:14.165: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:15:14.165: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:15:24.298: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:15:24.298: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:15:24.298: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:15:35.907: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:15:35.907: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:15:45.415: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:15:45.415: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:15:54.209: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
Dec 28 13:15:54.209: INFO: Waiting for Pod e2e-tests-statefulset-6h6x7/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 28 13:16:04.355: INFO: Waiting for StatefulSet e2e-tests-statefulset-6h6x7/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 28 13:16:14.196: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6h6x7
Dec 28 13:16:14.200: INFO: Scaling statefulset ss2 to 0
Dec 28 13:16:54.286: INFO: Waiting for statefulset status.replicas updated to 0
Dec 28 13:16:54.293: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:16:54.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6h6x7" for this suite.
Dec 28 13:17:04.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:17:04.624: INFO: namespace: e2e-tests-statefulset-6h6x7, resource: bindings, ignored listing per whitelist
Dec 28 13:17:04.721: INFO: namespace e2e-tests-statefulset-6h6x7 deletion completed in 10.243102322s

• [SLOW TEST:276.608 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:17:04.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 28 13:17:04.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005" in namespace "e2e-tests-downward-api-frpgd" to be "success or failure"
Dec 28 13:17:04.958: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.212494ms
Dec 28 13:17:07.328: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392502537s
Dec 28 13:17:09.474: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538224999s
Dec 28 13:17:11.490: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554236432s
Dec 28 13:17:13.906: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.970743134s
Dec 28 13:17:15.938: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.0030568s
Dec 28 13:17:17.950: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.014759766s
STEP: Saw pod success
Dec 28 13:17:17.950: INFO: Pod "downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:17:17.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005 container client-container: 
STEP: delete the pod
Dec 28 13:17:18.181: INFO: Waiting for pod downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005 to disappear
Dec 28 13:17:18.188: INFO: Pod downwardapi-volume-5777004b-2974-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:17:18.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-frpgd" for this suite.
Dec 28 13:17:26.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:17:26.460: INFO: namespace: e2e-tests-downward-api-frpgd, resource: bindings, ignored listing per whitelist
Dec 28 13:17:26.588: INFO: namespace e2e-tests-downward-api-frpgd deletion completed in 8.386659952s

• [SLOW TEST:21.867 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:17:26.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-64940bfb-2974-11ea-8e71-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 28 13:17:26.884: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005" in namespace "e2e-tests-projected-ss57c" to be "success or failure"
Dec 28 13:17:26.906: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.593773ms
Dec 28 13:17:28.920: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035573456s
Dec 28 13:17:30.929: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044508239s
Dec 28 13:17:32.970: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085473144s
Dec 28 13:17:35.665: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781091165s
Dec 28 13:17:37.681: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.796564876s
Dec 28 13:17:39.695: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.810897825s
STEP: Saw pod success
Dec 28 13:17:39.695: INFO: Pod "pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:17:39.700: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 28 13:17:40.760: INFO: Waiting for pod pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005 to disappear
Dec 28 13:17:40.767: INFO: Pod pod-projected-secrets-64955805-2974-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:17:40.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ss57c" for this suite.
Dec 28 13:17:46.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:17:47.185: INFO: namespace: e2e-tests-projected-ss57c, resource: bindings, ignored listing per whitelist
Dec 28 13:17:47.201: INFO: namespace e2e-tests-projected-ss57c deletion completed in 6.422860729s

• [SLOW TEST:20.613 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:17:47.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 28 13:17:47.520: INFO: Waiting up to 5m0s for pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005" in namespace "e2e-tests-emptydir-lzmpz" to be "success or failure"
Dec 28 13:17:47.532: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.859146ms
Dec 28 13:17:49.549: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028946359s
Dec 28 13:17:51.557: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036979694s
Dec 28 13:17:53.574: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05382918s
Dec 28 13:17:56.259: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739048394s
Dec 28 13:17:58.276: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.755497084s
Dec 28 13:18:00.300: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.779679028s
Dec 28 13:18:03.575: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.054745918s
STEP: Saw pod success
Dec 28 13:18:03.575: INFO: Pod "pod-70e1d7e1-2974-11ea-8e71-0242ac110005" satisfied condition "success or failure"
Dec 28 13:18:03.591: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-70e1d7e1-2974-11ea-8e71-0242ac110005 container test-container: 
STEP: delete the pod
Dec 28 13:18:04.243: INFO: Waiting for pod pod-70e1d7e1-2974-11ea-8e71-0242ac110005 to disappear
Dec 28 13:18:04.257: INFO: Pod pod-70e1d7e1-2974-11ea-8e71-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:18:04.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lzmpz" for this suite.
Dec 28 13:18:12.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:18:12.690: INFO: namespace: e2e-tests-emptydir-lzmpz, resource: bindings, ignored listing per whitelist
Dec 28 13:18:12.725: INFO: namespace e2e-tests-emptydir-lzmpz deletion completed in 8.459031145s

• [SLOW TEST:25.523 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:18:12.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-l9lmg
Dec 28 13:18:27.227: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-l9lmg
STEP: checking the pod's current state and verifying that restartCount is present
Dec 28 13:18:27.232: INFO: Initial restart count of pod liveness-http is 0
Dec 28 13:18:49.679: INFO: Restart count of pod e2e-tests-container-probe-l9lmg/liveness-http is now 1 (22.446962816s elapsed)
Dec 28 13:19:08.753: INFO: Restart count of pod e2e-tests-container-probe-l9lmg/liveness-http is now 2 (41.52087951s elapsed)
Dec 28 13:19:26.934: INFO: Restart count of pod e2e-tests-container-probe-l9lmg/liveness-http is now 3 (59.701778671s elapsed)
Dec 28 13:19:47.191: INFO: Restart count of pod e2e-tests-container-probe-l9lmg/liveness-http is now 4 (1m19.958757659s elapsed)
Dec 28 13:20:58.039: INFO: Restart count of pod e2e-tests-container-probe-l9lmg/liveness-http is now 5 (2m30.807000118s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:20:58.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-l9lmg" for this suite.
Dec 28 13:21:04.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:21:04.830: INFO: namespace: e2e-tests-container-probe-l9lmg, resource: bindings, ignored listing per whitelist
Dec 28 13:21:04.856: INFO: namespace e2e-tests-container-probe-l9lmg deletion completed in 6.540083509s

• [SLOW TEST:172.130 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 28 13:21:04.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-qjzd2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-qjzd2 to expose endpoints map[]
Dec 28 13:21:05.354: INFO: Get endpoints failed (6.130606ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 28 13:21:06.367: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-qjzd2 exposes endpoints map[] (1.018859519s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-qjzd2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-qjzd2 to expose endpoints map[pod1:[80]]
Dec 28 13:21:11.426: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.032759079s elapsed, will retry)
Dec 28 13:21:18.554: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (12.161070251s elapsed, will retry)
Dec 28 13:21:20.617: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-qjzd2 exposes endpoints map[pod1:[80]] (14.223185518s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-qjzd2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-qjzd2 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 28 13:21:25.170: INFO: Unexpected endpoints: found map[e76ca7a4-2974-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.537713612s elapsed, will retry)
Dec 28 13:21:30.988: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-qjzd2 exposes endpoints map[pod1:[80] pod2:[80]] (10.355432199s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-qjzd2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-qjzd2 to expose endpoints map[pod2:[80]]
Dec 28 13:21:32.169: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-qjzd2 exposes endpoints map[pod2:[80]] (1.168622113s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-qjzd2
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-qjzd2 to expose endpoints map[]
Dec 28 13:21:34.516: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-qjzd2 exposes endpoints map[] (1.841791667s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 28 13:21:36.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-qjzd2" for this suite.
Dec 28 13:22:02.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 28 13:22:02.953: INFO: namespace: e2e-tests-services-qjzd2, resource: bindings, ignored listing per whitelist
Dec 28 13:22:03.001: INFO: namespace e2e-tests-services-qjzd2 deletion completed in 25.651266245s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:58.145 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
Dec 28 13:22:03.002: INFO: Running AfterSuite actions on all nodes
Dec 28 13:22:03.002: INFO: Running AfterSuite actions on node 1
Dec 28 13:22:03.002: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9298.347 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS