I1218 10:47:26.467485 8 e2e.go:224] Starting e2e run "c71a598b-2183-11ea-ad77-0242ac110004" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576666045 - Will randomize all specs
Will run 201 of 2164 specs

Dec 18 10:47:27.069: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 10:47:27.074: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 18 10:47:27.096: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 18 10:47:27.162: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 18 10:47:27.163: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 18 10:47:27.163: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 18 10:47:27.175: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 18 10:47:27.175: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 18 10:47:27.175: INFO: e2e test version: v1.13.12
Dec 18 10:47:27.186: INFO: kube-apiserver version: v1.13.8
SS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:47:27.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Dec 18 10:47:27.631: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 10:47:27.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-tqhjq" to be "success or failure"
Dec 18 10:47:27.669: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117609ms
Dec 18 10:47:30.467: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811843819s
Dec 18 10:47:32.533: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.877704521s
Dec 18 10:47:34.564: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.909425842s
Dec 18 10:47:36.832: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.176644808s
Dec 18 10:47:38.858: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.203302488s
Dec 18 10:47:40.871: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.216595102s
STEP: Saw pod success
Dec 18 10:47:40.872: INFO: Pod "downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 10:47:40.876: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004 container client-container:
STEP: delete the pod
Dec 18 10:47:41.195: INFO: Waiting for pod downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004 to disappear
Dec 18 10:47:41.959: INFO: Pod downwardapi-volume-c87e29a7-2183-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:47:41.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tqhjq" for this suite.
Dec 18 10:47:48.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:47:48.600: INFO: namespace: e2e-tests-downward-api-tqhjq, resource: bindings, ignored listing per whitelist
Dec 18 10:47:48.674: INFO: namespace e2e-tests-downward-api-tqhjq deletion completed in 6.69163045s

• [SLOW TEST:21.488 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:47:48.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d52211b3-2183-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 10:47:48.882: INFO: Waiting up to 5m0s for pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-hsmbl" to be "success or failure"
Dec 18 10:47:48.895: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.409119ms
Dec 18 10:47:50.910: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028043115s
Dec 18 10:47:52.933: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051328982s
Dec 18 10:47:55.868: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.985703915s
Dec 18 10:47:57.894: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.012138741s
Dec 18 10:47:59.911: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 11.02926406s
Dec 18 10:48:01.929: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.047469527s
STEP: Saw pod success
Dec 18 10:48:01.930: INFO: Pod "pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 10:48:01.933: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004 container secret-volume-test:
STEP: delete the pod
Dec 18 10:48:02.607: INFO: Waiting for pod pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004 to disappear
Dec 18 10:48:02.951: INFO: Pod pod-secrets-d5241d32-2183-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:48:02.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hsmbl" for this suite.
Dec 18 10:48:09.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:48:09.717: INFO: namespace: e2e-tests-secrets-hsmbl, resource: bindings, ignored listing per whitelist
Dec 18 10:48:09.831: INFO: namespace e2e-tests-secrets-hsmbl deletion completed in 6.255654207s

• [SLOW TEST:21.157 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:48:09.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-f4grl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 10:48:10.116: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 10:48:52.636: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-f4grl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 10:48:52.637: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 10:48:53.087: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:48:53.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-f4grl" for this suite.
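The intra-pod check above has one pod curl another pod's test webserver at `/dial`, asking it to relay a `hostName` request to the target. A minimal sketch of how such a probe URL is assembled and its reply summarized; the reply shape `{"responses": [...]}` is an assumption inferred from how the test image typically answers, not taken from this log:

```python
import json
from urllib.parse import urlencode

def dial_url(prober_ip, target_ip, port=8080, protocol="http", tries=1):
    """Build a /dial probe URL like the one curl'd in the log above."""
    query = urlencode({
        "request": "hostName",   # ask the target pod to report its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{prober_ip}:8080/dial?{query}"

def hosts_seen(raw_reply):
    """Collect the distinct hostnames in a dial reply.

    Assumes a JSON reply shaped like {"responses": ["hostname", ...]};
    that schema is a hypothetical for illustration.
    """
    return sorted(set(json.loads(raw_reply).get("responses", [])))

print(dial_url("10.32.0.5", "10.32.0.4"))
print(hosts_seen('{"responses": ["target-pod"]}'))
```

The test passes once every expected pod hostname has been seen, which is why the log ends the section with an empty `Waiting for endpoints: map[]`.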
Dec 18 10:49:17.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:49:17.393: INFO: namespace: e2e-tests-pod-network-test-f4grl, resource: bindings, ignored listing per whitelist
Dec 18 10:49:17.402: INFO: namespace e2e-tests-pod-network-test-f4grl deletion completed in 24.300786542s

• [SLOW TEST:67.570 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:49:17.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 18 10:49:17.619: INFO: Waiting up to 5m0s for pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004" in namespace "e2e-tests-var-expansion-46fgs" to be "success or failure"
Dec 18 10:49:17.636: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.86121ms
Dec 18 10:49:19.656: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036556777s
Dec 18 10:49:21.787: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167903127s
Dec 18 10:49:23.832: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2128117s
Dec 18 10:49:26.175: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555701072s
Dec 18 10:49:28.189: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.569556708s
Dec 18 10:49:30.205: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.585338675s
STEP: Saw pod success
Dec 18 10:49:30.205: INFO: Pod "var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 10:49:30.209: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004 container dapi-container:
STEP: delete the pod
Dec 18 10:49:30.687: INFO: Waiting for pod var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004 to disappear
Dec 18 10:49:30.737: INFO: Pod var-expansion-0a068a7c-2184-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:49:30.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-46fgs" for this suite.
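The repeated `Waiting up to 5m0s for pod ... Elapsed: ...` lines throughout this log come from the framework polling the pod's phase every couple of seconds until a deadline. A minimal generic sketch of that wait-for-condition pattern (Python for illustration; the framework's actual helper is Go):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0, now=time.monotonic):
    """Poll check() until it returns truthy or the timeout elapses.

    Mirrors the log's pattern: each attempt notes its elapsed time, and
    the wait gives up once the budget (5m0s in the log) is spent.
    """
    start = now()
    while True:
        elapsed = now() - start
        if check():
            return elapsed          # condition met: "Saw pod success"
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(interval)

# Simulated pod that reports Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
state = {"phase": "Pending"}

def succeeded():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] == "Succeeded"

wait_for_condition(succeeded, timeout=10.0, interval=0.01)
```

The "success or failure" condition in the log is just such a check against `Phase="Succeeded"` (or `"Failed"`).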
Dec 18 10:49:37.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:49:37.100: INFO: namespace: e2e-tests-var-expansion-46fgs, resource: bindings, ignored listing per whitelist
Dec 18 10:49:37.184: INFO: namespace e2e-tests-var-expansion-46fgs deletion completed in 6.364449783s

• [SLOW TEST:19.781 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:49:37.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jqhwx
Dec 18 10:49:49.548: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jqhwx
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 10:49:49.556: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:53:51.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jqhwx" for this suite.
Dec 18 10:53:57.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:53:57.739: INFO: namespace: e2e-tests-container-probe-jqhwx, resource: bindings, ignored listing per whitelist
Dec 18 10:53:57.902: INFO: namespace e2e-tests-container-probe-jqhwx deletion completed in 6.334561623s

• [SLOW TEST:260.718 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:53:57.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b130f558-2184-11ea-ad77-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-b130f5f6-2184-11ea-ad77-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b130f558-2184-11ea-ad77-0242ac110004
STEP: Updating configmap cm-test-opt-upd-b130f5f6-2184-11ea-ad77-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-b130f624-2184-11ea-ad77-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:55:33.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7d7gv" for this suite.
Dec 18 10:55:57.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:55:57.244: INFO: namespace: e2e-tests-projected-7d7gv, resource: bindings, ignored listing per whitelist
Dec 18 10:55:57.400: INFO: namespace e2e-tests-projected-7d7gv deletion completed in 24.266417308s

• [SLOW TEST:119.497 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:55:57.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 18 10:56:06.310: INFO: 10 pods remaining
Dec 18 10:56:06.310: INFO: 10 pods has nil DeletionTimestamp
Dec 18 10:56:06.310: INFO:
Dec 18 10:56:08.431: INFO: 8 pods remaining
Dec 18 10:56:08.431: INFO: 0 pods has nil DeletionTimestamp
Dec 18 10:56:08.431: INFO:
Dec 18 10:56:10.633: INFO: 0 pods remaining
Dec 18 10:56:10.633: INFO: 0 pods has nil DeletionTimestamp
Dec 18 10:56:10.633: INFO:
STEP: Gathering metrics
W1218 10:56:11.083468 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 10:56:11.083: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:56:11.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5jtqs" for this suite.
Dec 18 10:56:27.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:56:27.188: INFO: namespace: e2e-tests-gc-5jtqs, resource: bindings, ignored listing per whitelist
Dec 18 10:56:27.303: INFO: namespace e2e-tests-gc-5jtqs deletion completed in 16.216550404s

• [SLOW TEST:29.903 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:56:27.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 18 10:56:27.574: INFO: Waiting up to 5m0s for pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-9rmqn" to be "success or failure"
Dec 18 10:56:27.591: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.208685ms
Dec 18 10:56:29.898: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324202876s
Dec 18 10:56:31.938: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363835214s
Dec 18 10:56:34.591: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.017469819s
Dec 18 10:56:36.604: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.029623272s
Dec 18 10:56:38.635: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.061162463s
Dec 18 10:56:40.846: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.272368094s
STEP: Saw pod success
Dec 18 10:56:40.847: INFO: Pod "pod-0a3c4afb-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 10:56:40.917: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0a3c4afb-2185-11ea-ad77-0242ac110004 container test-container:
STEP: delete the pod
Dec 18 10:56:41.218: INFO: Waiting for pod pod-0a3c4afb-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 10:56:41.233: INFO: Pod pod-0a3c4afb-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:56:41.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9rmqn" for this suite.
Dec 18 10:56:47.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:56:47.408: INFO: namespace: e2e-tests-emptydir-9rmqn, resource: bindings, ignored listing per whitelist
Dec 18 10:56:47.452: INFO: namespace e2e-tests-emptydir-9rmqn deletion completed in 6.207225602s

• [SLOW TEST:20.148 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:56:47.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 18 10:56:47.631: INFO: Waiting up to 5m0s for pod "pod-1644676e-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-9tl5n" to be "success or failure"
Dec 18 10:56:47.640: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.240865ms
Dec 18 10:56:49.839: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207632828s
Dec 18 10:56:51.896: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265150773s
Dec 18 10:56:53.917: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285294365s
Dec 18 10:56:55.955: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32340842s
Dec 18 10:56:58.387: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755877683s
STEP: Saw pod success
Dec 18 10:56:58.387: INFO: Pod "pod-1644676e-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 10:56:58.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1644676e-2185-11ea-ad77-0242ac110004 container test-container:
STEP: delete the pod
Dec 18 10:56:59.018: INFO: Waiting for pod pod-1644676e-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 10:56:59.040: INFO: Pod pod-1644676e-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:56:59.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9tl5n" for this suite.
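The EmptyDir test names above, such as `(root,0644,tmpfs)` and `(root,0666,default)`, encode the user the pod runs as, the octal file mode the test expects on the volume, and the emptyDir medium (`tmpfs` = memory-backed, `default` = node disk). That reading of the tuple is inferred from the test names in this log; a small sketch decoding such a label:

```python
from collections import namedtuple

EmptyDirCase = namedtuple("EmptyDirCase", "user mode medium")

def parse_case(label):
    """Split a '(root,0644,tmpfs)' style test label into its parts.

    The (user, mode, medium) interpretation is an assumption based on
    the test names in the log, not a documented format.
    """
    user, mode, medium = label.strip("()").split(",")
    return EmptyDirCase(user, int(mode, 8), medium)  # mode is octal

case = parse_case("(root,0644,tmpfs)")
# 0o644: owner read/write, group and others read-only
assert oct(case.mode) == "0o644"
print(case)
```

Each such test writes a file into the emptyDir with the given mode and medium, then asserts the container sees the expected permission bits.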
Dec 18 10:57:05.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 10:57:05.339: INFO: namespace: e2e-tests-emptydir-9tl5n, resource: bindings, ignored listing per whitelist
Dec 18 10:57:05.473: INFO: namespace e2e-tests-emptydir-9tl5n deletion completed in 6.419339562s

• [SLOW TEST:18.021 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 10:57:05.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-210bfae2-2185-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 10:57:05.780: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-2wz4j" to be "success or failure"
Dec 18 10:57:05.896: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 116.250129ms
Dec 18 10:57:08.164: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384220961s
Dec 18 10:57:10.185: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405592954s
Dec 18 10:57:12.362: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582592626s
Dec 18 10:57:14.385: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605328663s
Dec 18 10:57:16.405: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.625237239s
Dec 18 10:57:18.429: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.64938648s
STEP: Saw pod success
Dec 18 10:57:18.430: INFO: Pod "pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 10:57:18.445: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 18 10:57:18.679: INFO: Waiting for pod pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 10:57:18.697: INFO: Pod pod-projected-secrets-2113d36c-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 10:57:18.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2wz4j" for this suite.
Dec 18 10:57:24.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:57:24.966: INFO: namespace: e2e-tests-projected-2wz4j, resource: bindings, ignored listing per whitelist Dec 18 10:57:25.028: INFO: namespace e2e-tests-projected-2wz4j deletion completed in 6.322650133s • [SLOW TEST:19.555 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:57:25.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Dec 18 10:57:25.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-l44ct run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c 
cat && echo 'stdin closed'' Dec 18 10:57:39.866: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Dec 18 10:57:39.866: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 10:57:41.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l44ct" for this suite. Dec 18 10:57:54.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:57:54.608: INFO: namespace: e2e-tests-kubectl-l44ct, resource: bindings, ignored listing per whitelist Dec 18 10:57:54.726: INFO: namespace e2e-tests-kubectl-l44ct deletion completed in 12.486900871s • [SLOW TEST:29.698 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:57:54.727: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2k5h8 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-2k5h8 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-2k5h8 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-2k5h8 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-2k5h8 Dec 18 10:58:09.250: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2k5h8, name: ss-0, uid: 4420f673-2185-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Dec 18 10:58:12.502: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2k5h8, name: ss-0, uid: 4420f673-2185-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Dec 18 10:58:12.547: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2k5h8, name: ss-0, uid: 4420f673-2185-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. 
Dec 18 10:58:12.669: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-2k5h8 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-2k5h8 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-2k5h8 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 18 10:58:28.422: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2k5h8 Dec 18 10:58:28.430: INFO: Scaling statefulset ss to 0 Dec 18 10:58:38.516: INFO: Waiting for statefulset status.replicas updated to 0 Dec 18 10:58:38.528: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 10:58:38.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2k5h8" for this suite. 
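The `[AfterEach]` above tears down in a fixed order: scale the StatefulSet to 0, wait for `status.replicas` to reach 0, then delete it. A rough sketch of that ordering, with a fake client standing in for the real Kubernetes API (all names here are illustrative):

```python
import time

def scale_down_and_delete(client, name, timeout=60.0, interval=1.0):
    """Scale a StatefulSet to 0, wait for status.replicas == 0, then delete —
    the same teardown order the e2e AfterEach logs."""
    client.scale(name, replicas=0)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if client.status_replicas(name) == 0:
            client.delete(name)
            return True
        time.sleep(interval)
    raise TimeoutError(f"statefulset {name} never drained to 0 replicas")

class FakeClient:
    """Stand-in client: observed replicas drain by one per status poll."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.target = replicas
        self.deleted = False
    def scale(self, name, replicas):
        self.target = replicas
    def status_replicas(self, name):
        if self.replicas > self.target:
            self.replicas -= 1
        return self.replicas
    def delete(self, name):
        self.deleted = True
```

Deleting only after replicas drain mirrors why the log waits on `status.replicas` before `Deleting statefulset ss`.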
Dec 18 10:58:46.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:58:46.796: INFO: namespace: e2e-tests-statefulset-2k5h8, resource: bindings, ignored listing per whitelist Dec 18 10:58:47.019: INFO: namespace e2e-tests-statefulset-2k5h8 deletion completed in 8.368643374s • [SLOW TEST:52.292 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:58:47.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 10:58:53.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-k2jzx" for this suite. Dec 18 10:58:59.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:59:00.228: INFO: namespace: e2e-tests-namespaces-k2jzx, resource: bindings, ignored listing per whitelist Dec 18 10:59:00.273: INFO: namespace e2e-tests-namespaces-k2jzx deletion completed in 6.407571192s STEP: Destroying namespace "e2e-tests-nsdeletetest-g56fq" for this suite. Dec 18 10:59:00.276: INFO: Namespace e2e-tests-nsdeletetest-g56fq was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-qqmv8" for this suite. Dec 18 10:59:06.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:59:06.377: INFO: namespace: e2e-tests-nsdeletetest-qqmv8, resource: bindings, ignored listing per whitelist Dec 18 10:59:06.465: INFO: namespace e2e-tests-nsdeletetest-qqmv8 deletion completed in 6.188559053s • [SLOW TEST:19.445 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:59:06.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-fn6hq/configmap-test-69319817-2185-11ea-ad77-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 18 10:59:06.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-fn6hq" to be "success or failure" Dec 18 10:59:06.796: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.639531ms Dec 18 10:59:08.814: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041584862s Dec 18 10:59:10.848: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076014537s Dec 18 10:59:12.871: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099276768s Dec 18 10:59:14.888: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115670971s Dec 18 10:59:16.934: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162185789s Dec 18 10:59:18.979: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.207045498s STEP: Saw pod success Dec 18 10:59:18.979: INFO: Pod "pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure" Dec 18 10:59:18.986: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004 container env-test: STEP: delete the pod Dec 18 10:59:19.212: INFO: Waiting for pod pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004 to disappear Dec 18 10:59:19.225: INFO: Pod pod-configmaps-6933a955-2185-11ea-ad77-0242ac110004 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 10:59:19.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fn6hq" for this suite. Dec 18 10:59:25.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:59:25.541: INFO: namespace: e2e-tests-configmap-fn6hq, resource: bindings, ignored listing per whitelist Dec 18 10:59:25.546: INFO: namespace e2e-tests-configmap-fn6hq deletion completed in 6.210818235s • [SLOW TEST:19.081 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:59:25.547: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 18 10:59:25.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xbpwk' Dec 18 10:59:26.148: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 18 10:59:26.148: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Dec 18 10:59:28.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-xbpwk' Dec 18 10:59:28.858: INFO: stderr: "" Dec 18 10:59:28.859: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 10:59:28.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xbpwk" for this suite. 
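Both `kubectl run` invocations in this log emit a deprecation warning on stderr (`kubectl run --generator=… is DEPRECATED …`). As an illustration, a small helper that pulls the deprecated generator name out of such stderr text (the regex and function name are my own, not part of any kubectl tooling):

```python
import re

DEPRECATION_RE = re.compile(r"kubectl run --generator=(\S+) is DEPRECATED")

def deprecated_generator(stderr: str):
    """Return the generator kubectl flagged as deprecated, or None."""
    m = DEPRECATION_RE.search(stderr)
    return m.group(1) if m else None
```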
Dec 18 10:59:34.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:59:35.027: INFO: namespace: e2e-tests-kubectl-xbpwk, resource: bindings, ignored listing per whitelist Dec 18 10:59:35.096: INFO: namespace e2e-tests-kubectl-xbpwk deletion completed in 6.218923646s • [SLOW TEST:9.550 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:59:35.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-7a364ec1-2185-11ea-ad77-0242ac110004 STEP: Creating a pod to test consume configMaps Dec 18 10:59:35.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-pnd86" to be "success or failure" Dec 18 10:59:35.366: INFO: Pod 
"pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 38.981798ms Dec 18 10:59:37.399: INFO: Pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072828031s Dec 18 10:59:39.413: INFO: Pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08607922s Dec 18 10:59:42.448: INFO: Pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.121767243s Dec 18 10:59:44.482: INFO: Pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.155518521s Dec 18 10:59:46.515: INFO: Pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.188627158s STEP: Saw pod success Dec 18 10:59:46.515: INFO: Pod "pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure" Dec 18 10:59:46.553: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004 container configmap-volume-test: STEP: delete the pod Dec 18 10:59:46.851: INFO: Waiting for pod pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004 to disappear Dec 18 10:59:46.889: INFO: Pod pod-configmaps-7a37f8dd-2185-11ea-ad77-0242ac110004 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 10:59:46.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pnd86" for this suite. 
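These volume tests exercise modes like 0666 and 0777, but in API JSON those octal modes appear in decimal (the pod dump later in this log shows `"defaultMode": 420`, which is octal 0644) because JSON has no octal literals. A one-line conversion check:

```python
def mode_octal(decimal_mode: int) -> str:
    """Render a decimal defaultMode from pod JSON as the familiar octal string."""
    return format(decimal_mode, "04o")
```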
Dec 18 10:59:53.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 10:59:53.256: INFO: namespace: e2e-tests-configmap-pnd86, resource: bindings, ignored listing per whitelist Dec 18 10:59:53.345: INFO: namespace e2e-tests-configmap-pnd86 deletion completed in 6.353625922s • [SLOW TEST:18.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 10:59:53.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 18 10:59:53.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine 
--labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ng25z' Dec 18 10:59:53.743: INFO: stderr: "" Dec 18 10:59:53.743: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Dec 18 11:00:08.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ng25z -o json' Dec 18 11:00:08.918: INFO: stderr: "" Dec 18 11:00:08.918: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-18T10:59:53Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-ng25z\",\n \"resourceVersion\": \"15222576\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-ng25z/pods/e2e-test-nginx-pod\",\n \"uid\": \"852ea644-2185-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rvdvv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rvdvv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rvdvv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-18T10:59:53Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-18T11:00:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-18T11:00:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-18T10:59:53Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://980f63bec2737c8053cb1244df603197f4e10abfb25315facb48a05a4ba692d3\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-18T11:00:02Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-18T10:59:53Z\"\n }\n}\n" STEP: replace the image in the pod Dec 18 11:00:08.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-ng25z' Dec 18 11:00:09.517: INFO: stderr: "" Dec 18 11:00:09.517: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Dec 18 11:00:09.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ng25z' Dec 18 11:00:18.235: INFO: stderr: "" Dec 18 11:00:18.235: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:00:18.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ng25z" for this suite. Dec 18 11:00:24.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:00:24.409: INFO: namespace: e2e-tests-kubectl-ng25z, resource: bindings, ignored listing per whitelist Dec 18 11:00:24.518: INFO: namespace e2e-tests-kubectl-ng25z deletion completed in 6.198053835s • [SLOW TEST:31.173 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:00:24.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a 
default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 18 11:00:24.904: INFO: Waiting up to 5m0s for pod "pod-97c35849-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-mjfxn" to be "success or failure" Dec 18 11:00:25.023: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 119.354785ms Dec 18 11:00:27.309: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40521917s Dec 18 11:00:29.330: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425792458s Dec 18 11:00:31.447: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543244006s Dec 18 11:00:33.461: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557334272s Dec 18 11:00:35.637: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.732784185s Dec 18 11:00:37.656: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.751770429s
STEP: Saw pod success
Dec 18 11:00:37.656: INFO: Pod "pod-97c35849-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:00:37.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-97c35849-2185-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 11:00:38.634: INFO: Waiting for pod pod-97c35849-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 11:00:38.659: INFO: Pod pod-97c35849-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:00:38.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mjfxn" for this suite.
Dec 18 11:00:44.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:00:44.848: INFO: namespace: e2e-tests-emptydir-mjfxn, resource: bindings, ignored listing per whitelist
Dec 18 11:00:44.976: INFO: namespace e2e-tests-emptydir-mjfxn deletion completed in 6.303016565s

• [SLOW TEST:20.458 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:00:44.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a3f48f81-2185-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:00:45.360: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-pdw6g" to be "success or failure"
Dec 18 11:00:45.394: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 33.005715ms
Dec 18 11:00:47.596: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235179776s
Dec 18 11:00:49.612: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250944945s
Dec 18 11:00:52.307: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.946791531s
Dec 18 11:00:54.324: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963635184s
Dec 18 11:00:56.357: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.995897275s
Dec 18 11:00:58.383: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.022363431s
STEP: Saw pod success
Dec 18 11:00:58.383: INFO: Pod "pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:00:58.399: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 11:00:58.775: INFO: Waiting for pod pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 11:00:58.790: INFO: Pod pod-projected-secrets-a3f703de-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:00:58.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pdw6g" for this suite.
Dec 18 11:01:04.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:01:04.901: INFO: namespace: e2e-tests-projected-pdw6g, resource: bindings, ignored listing per whitelist
Dec 18 11:01:05.005: INFO: namespace e2e-tests-projected-pdw6g deletion completed in 6.206985393s

• [SLOW TEST:20.028 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:01:05.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-aff926a8-2185-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:01:05.704: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-7t9cd" to be "success or failure"
Dec 18 11:01:05.716: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.091747ms
Dec 18 11:01:08.036: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331672016s
Dec 18 11:01:10.051: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346688549s
Dec 18 11:01:12.064: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359962891s
Dec 18 11:01:14.089: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.385281712s
Dec 18 11:01:16.223: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.518572961s
Dec 18 11:01:18.330: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.626062224s
STEP: Saw pod success
Dec 18 11:01:18.330: INFO: Pod "pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:01:18.532: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 11:01:18.749: INFO: Waiting for pod pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 11:01:18.772: INFO: Pod pod-projected-configmaps-afff3a72-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:01:18.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7t9cd" for this suite.
Dec 18 11:01:26.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:01:26.925: INFO: namespace: e2e-tests-projected-7t9cd, resource: bindings, ignored listing per whitelist
Dec 18 11:01:27.007: INFO: namespace e2e-tests-projected-7t9cd deletion completed in 8.223652041s

• [SLOW TEST:22.001 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:01:27.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 18 11:01:27.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:27.932: INFO: stderr: ""
Dec 18 11:01:27.932: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 11:01:27.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:28.266: INFO: stderr: ""
Dec 18 11:01:28.266: INFO: stdout: "update-demo-nautilus-ks96l update-demo-nautilus-w5jmj "
Dec 18 11:01:28.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ks96l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:28.428: INFO: stderr: ""
Dec 18 11:01:28.428: INFO: stdout: ""
Dec 18 11:01:28.428: INFO: update-demo-nautilus-ks96l is created but not running
Dec 18 11:01:33.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:35.560: INFO: stderr: ""
Dec 18 11:01:35.560: INFO: stdout: "update-demo-nautilus-ks96l update-demo-nautilus-w5jmj "
Dec 18 11:01:35.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ks96l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:35.901: INFO: stderr: ""
Dec 18 11:01:35.902: INFO: stdout: ""
Dec 18 11:01:35.902: INFO: update-demo-nautilus-ks96l is created but not running
Dec 18 11:01:40.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:41.226: INFO: stderr: ""
Dec 18 11:01:41.226: INFO: stdout: "update-demo-nautilus-ks96l update-demo-nautilus-w5jmj "
Dec 18 11:01:41.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ks96l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:41.368: INFO: stderr: ""
Dec 18 11:01:41.368: INFO: stdout: "true"
Dec 18 11:01:41.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ks96l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:41.526: INFO: stderr: ""
Dec 18 11:01:41.527: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 11:01:41.527: INFO: validating pod update-demo-nautilus-ks96l
Dec 18 11:01:41.651: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 18 11:01:41.652: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 11:01:41.652: INFO: update-demo-nautilus-ks96l is verified up and running
Dec 18 11:01:41.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5jmj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:41.836: INFO: stderr: ""
Dec 18 11:01:41.836: INFO: stdout: "true"
Dec 18 11:01:41.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5jmj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:01:41.948: INFO: stderr: ""
Dec 18 11:01:41.948: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 11:01:41.948: INFO: validating pod update-demo-nautilus-w5jmj
Dec 18 11:01:41.962: INFO: got data: {
  "image": "nautilus.jpg"
}
Dec 18 11:01:41.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 11:01:41.962: INFO: update-demo-nautilus-w5jmj is verified up and running
STEP: rolling-update to new replication controller
Dec 18 11:01:41.965: INFO: scanned /root for discovery docs: 
Dec 18 11:01:41.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:19.178: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 18 11:02:19.178: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 11:02:19.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:19.420: INFO: stderr: ""
Dec 18 11:02:19.420: INFO: stdout: "update-demo-kitten-2bh95 update-demo-kitten-6q8gb update-demo-nautilus-w5jmj "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 18 11:02:24.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:24.605: INFO: stderr: ""
Dec 18 11:02:24.605: INFO: stdout: "update-demo-kitten-2bh95 update-demo-kitten-6q8gb "
Dec 18 11:02:24.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2bh95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:24.759: INFO: stderr: ""
Dec 18 11:02:24.760: INFO: stdout: "true"
Dec 18 11:02:24.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2bh95 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:24.934: INFO: stderr: ""
Dec 18 11:02:24.934: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 18 11:02:24.934: INFO: validating pod update-demo-kitten-2bh95
Dec 18 11:02:24.965: INFO: got data: {
  "image": "kitten.jpg"
}
Dec 18 11:02:24.965: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 18 11:02:24.965: INFO: update-demo-kitten-2bh95 is verified up and running
Dec 18 11:02:24.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6q8gb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:25.107: INFO: stderr: ""
Dec 18 11:02:25.107: INFO: stdout: "true"
Dec 18 11:02:25.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6q8gb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h8jzd'
Dec 18 11:02:25.207: INFO: stderr: ""
Dec 18 11:02:25.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 18 11:02:25.207: INFO: validating pod update-demo-kitten-6q8gb
Dec 18 11:02:25.222: INFO: got data: {
  "image": "kitten.jpg"
}
Dec 18 11:02:25.222: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 18 11:02:25.222: INFO: update-demo-kitten-6q8gb is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:02:25.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h8jzd" for this suite.
Dec 18 11:02:49.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:02:49.350: INFO: namespace: e2e-tests-kubectl-h8jzd, resource: bindings, ignored listing per whitelist
Dec 18 11:02:49.459: INFO: namespace e2e-tests-kubectl-h8jzd deletion completed in 24.230763088s

• [SLOW TEST:82.452 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:02:49.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ee148766-2185-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:02:49.831: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-hhv8j" to be "success or failure"
Dec 18 11:02:49.871: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.349677ms
Dec 18 11:02:51.905: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073775141s
Dec 18 11:02:53.929: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097925874s
Dec 18 11:02:56.973: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.14150974s
Dec 18 11:02:59.067: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.235938257s
Dec 18 11:03:01.096: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.264886106s
STEP: Saw pod success
Dec 18 11:03:01.096: INFO: Pod "pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:03:01.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 18 11:03:01.265: INFO: Waiting for pod pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 11:03:01.276: INFO: Pod pod-configmaps-ee25e9fe-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:03:01.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hhv8j" for this suite.
Dec 18 11:03:07.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:03:07.558: INFO: namespace: e2e-tests-configmap-hhv8j, resource: bindings, ignored listing per whitelist
Dec 18 11:03:07.660: INFO: namespace e2e-tests-configmap-hhv8j deletion completed in 6.366804528s

• [SLOW TEST:18.200 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:03:07.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-b9mw5/secret-test-f913377c-2185-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:03:08.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-b9mw5" to be "success or failure"
Dec 18 11:03:08.210: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.402978ms
Dec 18 11:03:10.400: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207727658s
Dec 18 11:03:12.416: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223882104s
Dec 18 11:03:14.941: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.748942821s
Dec 18 11:03:16.956: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.763918682s
Dec 18 11:03:18.976: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.784181206s
STEP: Saw pod success
Dec 18 11:03:18.976: INFO: Pod "pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:03:18.993: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004 container env-test: 
STEP: delete the pod
Dec 18 11:03:19.155: INFO: Waiting for pod pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004 to disappear
Dec 18 11:03:19.246: INFO: Pod pod-configmaps-f9147ef7-2185-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:03:19.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b9mw5" for this suite.
Dec 18 11:03:25.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:03:25.377: INFO: namespace: e2e-tests-secrets-b9mw5, resource: bindings, ignored listing per whitelist
Dec 18 11:03:25.477: INFO: namespace e2e-tests-secrets-b9mw5 deletion completed in 6.220560013s

• [SLOW TEST:17.817 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:03:25.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 11:03:25.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-j7wkh'
Dec 18 11:03:25.858: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 11:03:25.858: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 18 11:03:25.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-j7wkh'
Dec 18 11:03:26.220: INFO: stderr: ""
Dec 18 11:03:26.220: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:03:26.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j7wkh" for this suite.
Dec 18 11:03:50.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:03:50.442: INFO: namespace: e2e-tests-kubectl-j7wkh, resource: bindings, ignored listing per whitelist
Dec 18 11:03:50.612: INFO: namespace e2e-tests-kubectl-j7wkh deletion completed in 24.304538052s

• [SLOW TEST:25.136 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:03:50.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-mhgr8/configmap-test-127743da-2186-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:03:50.807: INFO: Waiting up to 5m0s for pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-mhgr8" to be "success or failure"
Dec 18 11:03:50.820: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.081702ms
Dec 18 11:03:52.838: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030153774s
Dec 18 11:03:54.857: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049188769s
Dec 18 11:03:57.709: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90165083s
Dec 18 11:03:59.984: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.17656304s
Dec 18 11:04:02.008: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.200828062s
Dec 18 11:04:04.021: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.213324784s
STEP: Saw pod success
Dec 18 11:04:04.021: INFO: Pod "pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:04:04.025: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004 container env-test: 
STEP: delete the pod
Dec 18 11:04:04.151: INFO: Waiting for pod pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004 to disappear
Dec 18 11:04:04.168: INFO: Pod pod-configmaps-1277da12-2186-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:04:04.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mhgr8" for this suite.
Dec 18 11:04:10.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:04:10.429: INFO: namespace: e2e-tests-configmap-mhgr8, resource: bindings, ignored listing per whitelist
Dec 18 11:04:10.615: INFO: namespace e2e-tests-configmap-mhgr8 deletion completed in 6.435210101s

• [SLOW TEST:20.002 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:04:10.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-7d75
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 11:04:10.847: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7d75" in namespace "e2e-tests-subpath-x7gv5" to be "success or failure"
Dec 18 11:04:10.869: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 21.98977ms
Dec 18 11:04:12.989: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141473204s
Dec 18 11:04:15.023: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175277121s
Dec 18 11:04:17.402: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554219798s
Dec 18 11:04:19.428: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580891715s
Dec 18 11:04:21.440: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.592303926s
Dec 18 11:04:23.457: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 12.609979674s
Dec 18 11:04:25.509: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 14.661270697s
Dec 18 11:04:27.707: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Pending", Reason="", readiness=false. Elapsed: 16.859101666s
Dec 18 11:04:29.726: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 18.879056543s
Dec 18 11:04:31.743: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 20.895531598s
Dec 18 11:04:33.801: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 22.953933749s
Dec 18 11:04:35.839: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 24.992031404s
Dec 18 11:04:37.918: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 27.070397628s
Dec 18 11:04:39.936: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 29.088782626s
Dec 18 11:04:41.957: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 31.109812724s
Dec 18 11:04:43.994: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 33.146840091s
Dec 18 11:04:46.013: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Running", Reason="", readiness=false. Elapsed: 35.165535373s
Dec 18 11:04:48.184: INFO: Pod "pod-subpath-test-configmap-7d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.336442676s
STEP: Saw pod success
Dec 18 11:04:48.184: INFO: Pod "pod-subpath-test-configmap-7d75" satisfied condition "success or failure"
Dec 18 11:04:48.597: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-7d75 container test-container-subpath-configmap-7d75: 
STEP: delete the pod
Dec 18 11:04:48.922: INFO: Waiting for pod pod-subpath-test-configmap-7d75 to disappear
Dec 18 11:04:48.947: INFO: Pod pod-subpath-test-configmap-7d75 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7d75
Dec 18 11:04:48.947: INFO: Deleting pod "pod-subpath-test-configmap-7d75" in namespace "e2e-tests-subpath-x7gv5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:04:48.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-x7gv5" for this suite.
Dec 18 11:04:55.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:04:55.128: INFO: namespace: e2e-tests-subpath-x7gv5, resource: bindings, ignored listing per whitelist Dec 18 11:04:55.257: INFO: namespace e2e-tests-subpath-x7gv5 deletion completed in 6.292636667s • [SLOW TEST:44.641 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:04:55.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:05:55.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "e2e-tests-container-probe-4h78c" for this suite. Dec 18 11:06:19.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:06:19.931: INFO: namespace: e2e-tests-container-probe-4h78c, resource: bindings, ignored listing per whitelist Dec 18 11:06:20.121: INFO: namespace e2e-tests-container-probe-4h78c deletion completed in 24.567575312s • [SLOW TEST:84.865 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:06:20.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 18 11:06:20.466: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Dec 18 11:06:20.499: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jfxbz/daemonsets","resourceVersion":"15223411"},"items":null} Dec 18 11:06:20.513: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jfxbz/pods","resourceVersion":"15223411"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:06:20.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-jfxbz" for this suite. Dec 18 11:06:26.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:06:26.836: INFO: namespace: e2e-tests-daemonsets-jfxbz, resource: bindings, ignored listing per whitelist Dec 18 11:06:26.990: INFO: namespace e2e-tests-daemonsets-jfxbz deletion completed in 6.365208907s S [SKIPPING] [6.869 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 18 11:06:20.466: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:06:26.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 18 11:06:27.216: INFO: Creating deployment "test-recreate-deployment" Dec 18 11:06:27.226: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 18 11:06:27.244: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Dec 18 11:06:29.295: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 18 11:06:29.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 18 11:06:31.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 18 11:06:34.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 18 11:06:35.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 18 11:06:37.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 18 11:06:39.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712263987, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 18 11:06:41.362: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 18 11:06:41.391: INFO: Updating deployment test-recreate-deployment Dec 18 11:06:41.391: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Dec 18 11:06:43.206: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-t6mkh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t6mkh/deployments/test-recreate-deployment,UID:6fbc6ffe-2186-11ea-a994-fa163e34d433,ResourceVersion:15223483,Generation:2,CreationTimestamp:2019-12-18 11:06:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-18 11:06:41 +0000 UTC 2019-12-18 11:06:41 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-18 11:06:42 +0000 UTC 
2019-12-18 11:06:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 18 11:06:43.243: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-t6mkh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t6mkh/replicasets/test-recreate-deployment-589c4bfd,UID:7868f1b2-2186-11ea-a994-fa163e34d433,ResourceVersion:15223480,Generation:1,CreationTimestamp:2019-12-18 11:06:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6fbc6ffe-2186-11ea-a994-fa163e34d433 0xc00264bcef 0xc00264bd00}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 18 11:06:43.243: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 18 11:06:43.244: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-t6mkh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t6mkh/replicasets/test-recreate-deployment-5bf7f65dc,UID:6fc0c6e1-2186-11ea-a994-fa163e34d433,ResourceVersion:15223472,Generation:2,CreationTimestamp:2019-12-18 11:06:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6fbc6ffe-2186-11ea-a994-fa163e34d433 0xc00264bdc0 
0xc00264bdc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 18 11:06:43.747: INFO: Pod "test-recreate-deployment-589c4bfd-b4pgx" 
is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-b4pgx,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-t6mkh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t6mkh/pods/test-recreate-deployment-589c4bfd-b4pgx,UID:786f27f7-2186-11ea-a994-fa163e34d433,ResourceVersion:15223484,Generation:0,CreationTimestamp:2019-12-18 11:06:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 7868f1b2-2186-11ea-a994-fa163e34d433 0xc00146d03f 0xc00146d050}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tbpfn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tbpfn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-tbpfn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00146d120} {node.kubernetes.io/unreachable Exists NoExecute 0xc00146d140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:06:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:06:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:06:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:06:41 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 11:06:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:06:43.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-t6mkh" for this suite. 
Dec 18 11:06:59.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:06:59.088: INFO: namespace: e2e-tests-deployment-t6mkh, resource: bindings, ignored listing per whitelist Dec 18 11:06:59.163: INFO: namespace e2e-tests-deployment-t6mkh deletion completed in 15.396300201s • [SLOW TEST:32.172 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:06:59.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-82dc0739-2186-11ea-ad77-0242ac110004 STEP: Creating a pod to test consume secrets Dec 18 11:06:59.319: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-vmmmg" to be "success or failure" Dec 18 11:06:59.456: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 136.348109ms Dec 18 11:07:01.857: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53807112s Dec 18 11:07:03.894: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574912811s Dec 18 11:07:06.283: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.963911698s Dec 18 11:07:08.324: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.004555312s Dec 18 11:07:10.370: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.050155633s Dec 18 11:07:13.391: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.072036601s STEP: Saw pod success Dec 18 11:07:13.392: INFO: Pod "pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004" satisfied condition "success or failure" Dec 18 11:07:13.598: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004 container secret-volume-test: STEP: delete the pod Dec 18 11:07:13.713: INFO: Waiting for pod pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004 to disappear Dec 18 11:07:13.718: INFO: Pod pod-projected-secrets-82dc8dc1-2186-11ea-ad77-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:07:13.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vmmmg" for this suite. 
Dec 18 11:07:19.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:07:19.913: INFO: namespace: e2e-tests-projected-vmmmg, resource: bindings, ignored listing per whitelist
Dec 18 11:07:19.968: INFO: namespace e2e-tests-projected-vmmmg deletion completed in 6.24385325s

• [SLOW TEST:20.805 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:07:19.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-8f493cef-2186-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:07:20.186: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-brrp6" to be "success or failure"
Dec 18 11:07:20.279: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 93.080014ms
Dec 18 11:07:22.713: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.526807883s
Dec 18 11:07:24.725: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539262939s
Dec 18 11:07:27.003: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.816605299s
Dec 18 11:07:29.011: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.825368528s
Dec 18 11:07:31.034: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.848262722s
STEP: Saw pod success
Dec 18 11:07:31.034: INFO: Pod "pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:07:31.040: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004 container projected-secret-volume-test:
STEP: delete the pod
Dec 18 11:07:31.160: INFO: Waiting for pod pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004 to disappear
Dec 18 11:07:31.231: INFO: Pod pod-projected-secrets-8f4c837f-2186-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:07:31.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-brrp6" for this suite.
Dec 18 11:07:39.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:07:39.418: INFO: namespace: e2e-tests-projected-brrp6, resource: bindings, ignored listing per whitelist
Dec 18 11:07:39.478: INFO: namespace e2e-tests-projected-brrp6 deletion completed in 8.237712716s

• [SLOW TEST:19.510 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:07:39.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9b03f316-2186-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:07:39.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-bkqj2" to be "success or failure"
Dec 18 11:07:39.919: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.423281ms
Dec 18 11:07:42.401: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.499792171s
Dec 18 11:07:44.425: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524237333s
Dec 18 11:07:47.411: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.510178744s
Dec 18 11:07:49.607: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.706169058s
Dec 18 11:07:51.622: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.721112204s
Dec 18 11:07:53.646: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.744595776s
STEP: Saw pod success
Dec 18 11:07:53.646: INFO: Pod "pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:07:53.659: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004 container projected-configmap-volume-test:
STEP: delete the pod
Dec 18 11:07:55.169: INFO: Waiting for pod pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004 to disappear
Dec 18 11:07:55.200: INFO: Pod pod-projected-configmaps-9b09937d-2186-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:07:55.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bkqj2" for this suite.
Dec 18 11:08:01.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:08:01.381: INFO: namespace: e2e-tests-projected-bkqj2, resource: bindings, ignored listing per whitelist
Dec 18 11:08:01.594: INFO: namespace e2e-tests-projected-bkqj2 deletion completed in 6.350309846s

• [SLOW TEST:22.116 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:08:01.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:08:02.026: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 18 11:08:02.047: INFO: Number of nodes with available pods: 0
Dec 18 11:08:02.047: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 18 11:08:02.162: INFO: Number of nodes with available pods: 0
Dec 18 11:08:02.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:03.176: INFO: Number of nodes with available pods: 0
Dec 18 11:08:03.176: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:04.608: INFO: Number of nodes with available pods: 0
Dec 18 11:08:04.608: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:05.183: INFO: Number of nodes with available pods: 0
Dec 18 11:08:05.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:06.179: INFO: Number of nodes with available pods: 0
Dec 18 11:08:06.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:07.193: INFO: Number of nodes with available pods: 0
Dec 18 11:08:07.193: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:10.232: INFO: Number of nodes with available pods: 0
Dec 18 11:08:10.232: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:11.973: INFO: Number of nodes with available pods: 0
Dec 18 11:08:11.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:12.793: INFO: Number of nodes with available pods: 0
Dec 18 11:08:12.793: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:13.177: INFO: Number of nodes with available pods: 0
Dec 18 11:08:13.177: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:14.176: INFO: Number of nodes with available pods: 1
Dec 18 11:08:14.176: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 18 11:08:14.244: INFO: Number of nodes with available pods: 1
Dec 18 11:08:14.244: INFO: Number of running nodes: 0, number of available pods: 1
Dec 18 11:08:15.265: INFO: Number of nodes with available pods: 0
Dec 18 11:08:15.265: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 18 11:08:15.311: INFO: Number of nodes with available pods: 0
Dec 18 11:08:15.311: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:16.330: INFO: Number of nodes with available pods: 0
Dec 18 11:08:16.330: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:17.798: INFO: Number of nodes with available pods: 0
Dec 18 11:08:17.798: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:18.339: INFO: Number of nodes with available pods: 0
Dec 18 11:08:18.339: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:19.334: INFO: Number of nodes with available pods: 0
Dec 18 11:08:19.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:20.332: INFO: Number of nodes with available pods: 0
Dec 18 11:08:20.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:21.326: INFO: Number of nodes with available pods: 0
Dec 18 11:08:21.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:22.330: INFO: Number of nodes with available pods: 0
Dec 18 11:08:22.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:23.330: INFO: Number of nodes with available pods: 0
Dec 18 11:08:23.330: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:24.317: INFO: Number of nodes with available pods: 0
Dec 18 11:08:24.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:25.337: INFO: Number of nodes with available pods: 0
Dec 18 11:08:25.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:26.329: INFO: Number of nodes with available pods: 0
Dec 18 11:08:26.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:27.325: INFO: Number of nodes with available pods: 0
Dec 18 11:08:27.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:28.329: INFO: Number of nodes with available pods: 0
Dec 18 11:08:28.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:29.337: INFO: Number of nodes with available pods: 0
Dec 18 11:08:29.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:30.333: INFO: Number of nodes with available pods: 0
Dec 18 11:08:30.333: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:31.325: INFO: Number of nodes with available pods: 0
Dec 18 11:08:31.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:32.329: INFO: Number of nodes with available pods: 0
Dec 18 11:08:32.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:33.348: INFO: Number of nodes with available pods: 0
Dec 18 11:08:33.349: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:34.334: INFO: Number of nodes with available pods: 0
Dec 18 11:08:34.334: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:35.788: INFO: Number of nodes with available pods: 0
Dec 18 11:08:35.788: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:36.325: INFO: Number of nodes with available pods: 0
Dec 18 11:08:36.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:37.416: INFO: Number of nodes with available pods: 0
Dec 18 11:08:37.416: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:39.265: INFO: Number of nodes with available pods: 0
Dec 18 11:08:39.266: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:39.717: INFO: Number of nodes with available pods: 0
Dec 18 11:08:39.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:40.329: INFO: Number of nodes with available pods: 0
Dec 18 11:08:40.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:41.401: INFO: Number of nodes with available pods: 0
Dec 18 11:08:41.401: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:42.364: INFO: Number of nodes with available pods: 0
Dec 18 11:08:42.365: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:43.333: INFO: Number of nodes with available pods: 0
Dec 18 11:08:43.334: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:08:44.325: INFO: Number of nodes with available pods: 1
Dec 18 11:08:44.325: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zqqj8, will wait for the garbage collector to delete the pods
Dec 18 11:08:44.409: INFO: Deleting DaemonSet.extensions daemon-set took: 17.386876ms
Dec 18 11:08:44.509: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.650686ms
Dec 18 11:09:02.715: INFO: Number of nodes with available pods: 0
Dec 18 11:09:02.716: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 11:09:02.720: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zqqj8/daemonsets","resourceVersion":"15223794"},"items":null}
Dec 18 11:09:02.722: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zqqj8/pods","resourceVersion":"15223794"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:09:02.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zqqj8" for this suite.
Dec 18 11:09:10.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:09:11.034: INFO: namespace: e2e-tests-daemonsets-zqqj8, resource: bindings, ignored listing per whitelist
Dec 18 11:09:11.056: INFO: namespace e2e-tests-daemonsets-zqqj8 deletion completed in 8.243417767s

• [SLOW TEST:69.461 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:09:11.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-kwwv9
Dec 18 11:09:23.505: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-kwwv9
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 11:09:23.520: INFO: Initial restart count of pod liveness-http is 0
Dec 18 11:09:42.368: INFO: Restart count of pod e2e-tests-container-probe-kwwv9/liveness-http is now 1 (18.84824682s elapsed)
Dec 18 11:10:02.712: INFO: Restart count of pod e2e-tests-container-probe-kwwv9/liveness-http is now 2 (39.19197422s elapsed)
Dec 18 11:10:23.678: INFO: Restart count of pod e2e-tests-container-probe-kwwv9/liveness-http is now 3 (1m0.158758946s elapsed)
Dec 18 11:10:44.058: INFO: Restart count of pod e2e-tests-container-probe-kwwv9/liveness-http is now 4 (1m20.537983432s elapsed)
Dec 18 11:11:52.921: INFO: Restart count of pod e2e-tests-container-probe-kwwv9/liveness-http is now 5 (2m29.401232949s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:11:52.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kwwv9" for this suite.
Dec 18 11:12:01.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:12:01.153: INFO: namespace: e2e-tests-container-probe-kwwv9, resource: bindings, ignored listing per whitelist
Dec 18 11:12:01.285: INFO: namespace e2e-tests-container-probe-kwwv9 deletion completed in 8.286085093s

• [SLOW TEST:170.229 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:12:01.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-36fb9fa7-2187-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:12:01.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-vrhn2" to be "success or failure"
Dec 18 11:12:01.623: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.662428ms
Dec 18 11:12:03.646: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047138108s
Dec 18 11:12:05.661: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06259065s
Dec 18 11:12:07.973: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.374120006s
Dec 18 11:12:09.993: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394661887s
Dec 18 11:12:12.014: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.415666453s
Dec 18 11:12:14.319: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.720850568s
STEP: Saw pod success
Dec 18 11:12:14.320: INFO: Pod "pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:12:14.342: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004 container configmap-volume-test:
STEP: delete the pod
Dec 18 11:12:14.830: INFO: Waiting for pod pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004 to disappear
Dec 18 11:12:14.919: INFO: Pod pod-configmaps-36fedad1-2187-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:12:14.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vrhn2" for this suite.
Dec 18 11:12:21.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:12:21.081: INFO: namespace: e2e-tests-configmap-vrhn2, resource: bindings, ignored listing per whitelist
Dec 18 11:12:21.195: INFO: namespace e2e-tests-configmap-vrhn2 deletion completed in 6.260556835s

• [SLOW TEST:19.911 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:12:21.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 18 11:12:45.606: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:45.637: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:47.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:47.668: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:49.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:49.656: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:51.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:51.663: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:53.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:53.652: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:55.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:55.668: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:57.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:57.665: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:12:59.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:12:59.655: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:01.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:01.757: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:03.637: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:03.659: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:05.637: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:05.656: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:07.637: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:07.651: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:09.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:09.658: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:11.638: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:11.650: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 11:13:13.637: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 11:13:13.650: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:13:13.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-r972b" for this suite.
Dec 18 11:13:37.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:13:37.971: INFO: namespace: e2e-tests-container-lifecycle-hook-r972b, resource: bindings, ignored listing per whitelist
Dec 18 11:13:37.992: INFO: namespace e2e-tests-container-lifecycle-hook-r972b deletion completed in 24.296550216s

• [SLOW TEST:76.796 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:13:37.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-m2q6
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 11:13:38.322: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m2q6" in namespace "e2e-tests-subpath-ktjvs" to be "success or failure"
Dec 18 11:13:38.339: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.150277ms
Dec 18 11:13:40.726: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.403779502s
Dec 18 11:13:42.748: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.426530251s
Dec 18 11:13:44.884: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.561948676s
Dec 18 11:13:46.930: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607688048s
Dec 18 11:13:49.045: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.722980112s
Dec 18 11:13:51.178: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.856443584s
Dec 18 11:13:53.207: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.885230263s
Dec 18 11:13:55.219: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.896956345s
Dec 18 11:13:57.243: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 18.921235038s
Dec 18 11:13:59.530: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 21.20803119s
Dec 18 11:14:01.549: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 23.22728538s
Dec 18 11:14:03.570: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 25.248168185s
Dec 18 11:14:05.588: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 27.26638525s
Dec 18 11:14:07.609: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 29.286861144s
Dec 18 11:14:09.635: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 31.312745049s
Dec 18 11:14:11.665: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Running", Reason="", readiness=false. Elapsed: 33.34322121s
Dec 18 11:14:13.685: INFO: Pod "pod-subpath-test-configmap-m2q6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.36289789s
STEP: Saw pod success
Dec 18 11:14:13.685: INFO: Pod "pod-subpath-test-configmap-m2q6" satisfied condition "success or failure"
Dec 18 11:14:13.695: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-m2q6 container test-container-subpath-configmap-m2q6:
STEP: delete the pod
Dec 18 11:14:13.919: INFO: Waiting for pod pod-subpath-test-configmap-m2q6 to disappear
Dec 18 11:14:13.935: INFO: Pod pod-subpath-test-configmap-m2q6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-m2q6
Dec 18 11:14:13.935: INFO: Deleting pod "pod-subpath-test-configmap-m2q6" in namespace "e2e-tests-subpath-ktjvs"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:14:14.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ktjvs" for this suite.
Dec 18 11:14:22.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:14:22.398: INFO: namespace: e2e-tests-subpath-ktjvs, resource: bindings, ignored listing per whitelist
Dec 18 11:14:22.403: INFO: namespace e2e-tests-subpath-ktjvs deletion completed in 8.308178067s

• [SLOW TEST:44.410 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:14:22.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 18 11:14:23.034: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-bh587" to be "success or failure"
Dec 18 11:14:23.049: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.300514ms
Dec 18 11:14:25.597: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.563643329s
Dec 18 11:14:27.619: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585706451s
Dec 18 11:14:29.684: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649951712s
Dec 18 11:14:32.703: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.669314414s
Dec 18 11:14:34.724: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.689993247s
Dec 18 11:14:36.756: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.721841031s
STEP: Saw pod success
Dec 18 11:14:36.756: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 18 11:14:36.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 18 11:14:37.028: INFO: Waiting for pod pod-host-path-test to disappear
Dec 18 11:14:37.042: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:14:37.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-bh587" for this suite.
Dec 18 11:14:43.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:14:43.384: INFO: namespace: e2e-tests-hostpath-bh587, resource: bindings, ignored listing per whitelist
Dec 18 11:14:43.425: INFO: namespace e2e-tests-hostpath-bh587 deletion completed in 6.275814501s
• [SLOW TEST:21.022 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:14:43.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:14:57.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-nvpk7" for this suite.
Dec 18 11:15:21.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:15:21.208: INFO: namespace: e2e-tests-replication-controller-nvpk7, resource: bindings, ignored listing per whitelist
Dec 18 11:15:21.328: INFO: namespace e2e-tests-replication-controller-nvpk7 deletion completed in 24.281182815s
• [SLOW TEST:37.903 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:15:21.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch
notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 18 11:15:21.599: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224479,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 11:15:21.599: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224479,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 18 11:15:31.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224492,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 18 11:15:31.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224492,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 18 11:15:41.660: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224505,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 11:15:41.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224505,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 18 11:15:51.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224518,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 11:15:51.688: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-a,UID:ae3f93e7-2187-11ea-a994-fa163e34d433,ResourceVersion:15224518,Generation:0,CreationTimestamp:2019-12-18 11:15:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 18 11:16:01.818: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-b,UID:c627186a-2187-11ea-a994-fa163e34d433,ResourceVersion:15224531,Generation:0,CreationTimestamp:2019-12-18 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 11:16:01.818: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-b,UID:c627186a-2187-11ea-a994-fa163e34d433,ResourceVersion:15224531,Generation:0,CreationTimestamp:2019-12-18 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 18 11:16:11.856: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-b,UID:c627186a-2187-11ea-a994-fa163e34d433,ResourceVersion:15224543,Generation:0,CreationTimestamp:2019-12-18 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 11:16:11.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-kmpdt,SelfLink:/api/v1/namespaces/e2e-tests-watch-kmpdt/configmaps/e2e-watch-test-configmap-b,UID:c627186a-2187-11ea-a994-fa163e34d433,ResourceVersion:15224543,Generation:0,CreationTimestamp:2019-12-18 11:16:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:16:21.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-kmpdt" for this suite.
Dec 18 11:16:27.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:16:28.061: INFO: namespace: e2e-tests-watch-kmpdt, resource: bindings, ignored listing per whitelist
Dec 18 11:16:28.097: INFO: namespace e2e-tests-watch-kmpdt deletion completed in 6.211964645s
• [SLOW TEST:66.768 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:16:28.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 18 11:16:38.434: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d60b4780-2187-11ea-ad77-0242ac110004,GenerateName:,Namespace:e2e-tests-events-pqhfq,SelfLink:/api/v1/namespaces/e2e-tests-events-pqhfq/pods/send-events-d60b4780-2187-11ea-ad77-0242ac110004,UID:d60c2998-2187-11ea-a994-fa163e34d433,ResourceVersion:15224591,Generation:0,CreationTimestamp:2019-12-18 11:16:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 358655913,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-httms {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-httms,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-httms true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00099c870} {node.kubernetes.io/unreachable Exists NoExecute 0xc00099c890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:16:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:16:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:16:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 11:16:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-18 11:16:28 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-18 11:16:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://c0932d74cf8ead521187cead632549163bc4b936928d0b1628ac9a0df6b01495}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Dec 18 11:16:40.450: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 18 11:16:42.472: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:16:42.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-pqhfq" for this suite.
Dec 18 11:17:24.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:17:24.783: INFO: namespace: e2e-tests-events-pqhfq, resource: bindings, ignored listing per whitelist
Dec 18 11:17:24.891: INFO: namespace e2e-tests-events-pqhfq deletion completed in 42.271943833s
• [SLOW TEST:56.793 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:17:24.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 18 11:17:25.200: INFO: Waiting up to 5m0s for pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-hsklz" to be "success or failure"
Dec 18 11:17:25.212: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.988213ms
Dec 18 11:17:27.505: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305145809s
Dec 18 11:17:29.534: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333517527s
Dec 18 11:17:31.913: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.712837339s
Dec 18 11:17:33.927: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7267118s
Dec 18 11:17:36.178: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.978125231s
Dec 18 11:17:38.549: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.34859559s
STEP: Saw pod success
Dec 18 11:17:38.549: INFO: Pod "pod-f7ea1f79-2187-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:17:39.133: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f7ea1f79-2187-11ea-ad77-0242ac110004 container test-container:
STEP: delete the pod
Dec 18 11:17:39.271: INFO: Waiting for pod pod-f7ea1f79-2187-11ea-ad77-0242ac110004 to disappear
Dec 18 11:17:39.282: INFO: Pod pod-f7ea1f79-2187-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:17:39.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hsklz" for this suite.
Dec 18 11:17:45.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:17:45.347: INFO: namespace: e2e-tests-emptydir-hsklz, resource: bindings, ignored listing per whitelist
Dec 18 11:17:45.475: INFO: namespace e2e-tests-emptydir-hsklz deletion completed in 6.180881857s
• [SLOW TEST:20.583 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:17:45.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-041c66fc-2188-11ea-ad77-0242ac110004
STEP: Creating secret with name s-test-opt-upd-041c68d1-2188-11ea-ad77-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-041c66fc-2188-11ea-ad77-0242ac110004
STEP: Updating secret s-test-opt-upd-041c68d1-2188-11ea-ad77-0242ac110004
STEP: Creating secret with name s-test-opt-create-041c6934-2188-11ea-ad77-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:19:08.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-znnp9" for this suite.
Dec 18 11:19:34.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:19:34.165: INFO: namespace: e2e-tests-secrets-znnp9, resource: bindings, ignored listing per whitelist
Dec 18 11:19:34.254: INFO: namespace e2e-tests-secrets-znnp9 deletion completed in 26.172852299s
• [SLOW TEST:108.778 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:19:34.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 18 11:19:34.610: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 18 11:19:34.623: INFO: Waiting for terminating namespaces to be deleted...
Dec 18 11:19:34.627: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 18 11:19:34.642: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 18 11:19:34.642: INFO: Container coredns ready: true, restart count 0
Dec 18 11:19:34.642: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 18 11:19:34.642: INFO: Container kube-proxy ready: true, restart count 0
Dec 18 11:19:34.642: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Dec 18 11:19:34.642: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 18 11:19:34.642: INFO: Container weave ready: true, restart count 0
Dec 18 11:19:34.642: INFO: Container weave-npc ready: true, restart count 0
Dec 18 11:19:34.642: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 18 11:19:34.642: INFO: Container coredns ready: true, restart count 0
Dec 18 11:19:34.642: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Dec 18 11:19:34.642: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Dec 18 11:19:34.642: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e1732b5f4960ed], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:19:35.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-lxkkj" for this suite.
Dec 18 11:19:41.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:19:41.970: INFO: namespace: e2e-tests-sched-pred-lxkkj, resource: bindings, ignored listing per whitelist
Dec 18 11:19:41.992: INFO: namespace e2e-tests-sched-pred-lxkkj deletion completed in 6.284536874s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.737 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:19:41.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:19:54.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5bsd9" for this suite.
Dec 18 11:20:01.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:20:01.261: INFO: namespace: e2e-tests-kubelet-test-5bsd9, resource: bindings, ignored listing per whitelist
Dec 18 11:20:01.311: INFO: namespace e2e-tests-kubelet-test-5bsd9 deletion completed in 6.527227006s
• [SLOW TEST:19.318 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:20:01.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 18 11:20:01.740: INFO: Number of nodes with available pods: 0 Dec 18 11:20:01.740: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:02.769: INFO: Number of nodes with available pods: 0 Dec 18 11:20:02.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:04.405: INFO: Number of nodes with available pods: 0 Dec 18 11:20:04.405: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:04.755: INFO: Number of nodes with available pods: 0 Dec 18 11:20:04.755: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:05.774: INFO: Number of nodes with available pods: 0 Dec 18 11:20:05.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:06.779: INFO: Number of nodes with available pods: 0 Dec 18 11:20:06.779: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:07.767: INFO: Number of nodes with available pods: 0 Dec 18 11:20:07.767: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:08.911: INFO: Number of nodes with available pods: 0 Dec 18 11:20:08.912: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:09.923: INFO: Number of nodes with available pods: 0 Dec 18 11:20:09.923: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Dec 18 11:20:10.759: INFO: Number of nodes with available pods: 0 Dec 18 11:20:10.759: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod 
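The poll loop here is the e2e test waiting until the DaemonSet's pod becomes available on the cluster's single node. For orientation, a simple DaemonSet of the general shape this test creates can be sketched as follows — the real e2e framework constructs the object in Go, and while the `daemon-set` name matches the log, the image and label values below are illustrative assumptions:

```yaml
# Hypothetical sketch of a "simple daemon" like the one under test.
# Image and label values are assumptions, not the test's exact values.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder image; assumption
```

After the pods launch, the test deletes a daemon pod and verifies the DaemonSet controller revives it, which is what the second poll loop in this log checks.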
Dec 18 11:20:11.772: INFO: Number of nodes with available pods: 0
Dec 18 11:20:11.772: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:12.764: INFO: Number of nodes with available pods: 0
Dec 18 11:20:12.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:13.764: INFO: Number of nodes with available pods: 1
Dec 18 11:20:13.764: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 18 11:20:13.850: INFO: Number of nodes with available pods: 0
Dec 18 11:20:13.850: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:14.934: INFO: Number of nodes with available pods: 0
Dec 18 11:20:14.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:15.918: INFO: Number of nodes with available pods: 0
Dec 18 11:20:15.919: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:16.904: INFO: Number of nodes with available pods: 0
Dec 18 11:20:16.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:17.919: INFO: Number of nodes with available pods: 0
Dec 18 11:20:17.919: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:18.890: INFO: Number of nodes with available pods: 0
Dec 18 11:20:18.890: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:19.876: INFO: Number of nodes with available pods: 0
Dec 18 11:20:19.876: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:20.886: INFO: Number of nodes with available pods: 0
Dec 18 11:20:20.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:21.879: INFO: Number of nodes with available pods: 0
Dec 18 11:20:21.879: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:22.949: INFO: Number of nodes with available pods: 0
Dec 18 11:20:22.949: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:23.930: INFO: Number of nodes with available pods: 0
Dec 18 11:20:23.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:24.909: INFO: Number of nodes with available pods: 0
Dec 18 11:20:24.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:25.910: INFO: Number of nodes with available pods: 0
Dec 18 11:20:25.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:27.536: INFO: Number of nodes with available pods: 0
Dec 18 11:20:27.537: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:27.968: INFO: Number of nodes with available pods: 0
Dec 18 11:20:27.968: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:28.987: INFO: Number of nodes with available pods: 0
Dec 18 11:20:28.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:29.898: INFO: Number of nodes with available pods: 0
Dec 18 11:20:29.898: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 11:20:30.880: INFO: Number of nodes with available pods: 1
Dec 18 11:20:30.880: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-twfgx, will wait for the garbage collector to delete the pods
Dec 18 11:20:30.966: INFO: Deleting DaemonSet.extensions daemon-set took: 20.266214ms
Dec 18 11:20:31.167: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.949152ms
Dec 18 11:20:39.173: INFO: Number of nodes with available pods: 0
Dec 18 11:20:39.173: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 11:20:39.176: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-twfgx/daemonsets","resourceVersion":"15225033"},"items":null}
Dec 18 11:20:39.179: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-twfgx/pods","resourceVersion":"15225033"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:20:39.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-twfgx" for this suite.
Dec 18 11:20:47.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:20:47.421: INFO: namespace: e2e-tests-daemonsets-twfgx, resource: bindings, ignored listing per whitelist
Dec 18 11:20:47.478: INFO: namespace e2e-tests-daemonsets-twfgx deletion completed in 8.285276469s
• [SLOW TEST:46.167 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:20:47.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-70a1ac32-2188-11ea-ad77-0242ac110004
STEP: Creating secret with name secret-projected-all-test-volume-70a1ac1d-2188-11ea-ad77-0242ac110004
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 18 11:20:47.739: INFO: Waiting up to 5m0s for pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-g52f4" to be "success or failure"
Dec 18 11:20:47.757: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.054794ms
Dec 18 11:20:49.837: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097940142s
Dec 18 11:20:52.005: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265478092s
Dec 18 11:20:54.023: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284111115s
Dec 18 11:20:56.062: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322832401s
Dec 18 11:20:58.088: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.349002175s
STEP: Saw pod success
Dec 18 11:20:58.089: INFO: Pod "projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:20:58.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004 container projected-all-volume-test:
STEP: delete the pod
Dec 18 11:20:58.232: INFO: Waiting for pod projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004 to disappear
Dec 18 11:20:58.417: INFO: Pod projected-volume-70a1ab9d-2188-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:20:58.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g52f4" for this suite.
Dec 18 11:21:04.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:21:04.571: INFO: namespace: e2e-tests-projected-g52f4, resource: bindings, ignored listing per whitelist
Dec 18 11:21:04.680: INFO: namespace e2e-tests-projected-g52f4 deletion completed in 6.25172389s
• [SLOW TEST:17.201 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:21:04.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:21:04.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-gmshq" to be "success or failure"
Dec 18 11:21:05.064: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 93.130651ms
Dec 18 11:21:07.173: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201981053s
Dec 18 11:21:09.204: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233190547s
Dec 18 11:21:11.397: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426082461s
Dec 18 11:21:13.406: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435459321s
Dec 18 11:21:15.514: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.543386476s
STEP: Saw pod success
Dec 18 11:21:15.514: INFO: Pod "downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:21:15.521: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004 container client-container:
STEP: delete the pod
Dec 18 11:21:15.603: INFO: Waiting for pod downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004 to disappear
Dec 18 11:21:16.059: INFO: Pod downwardapi-volume-7ae80029-2188-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:21:16.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gmshq" for this suite.
Dec 18 11:21:22.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:21:23.072: INFO: namespace: e2e-tests-downward-api-gmshq, resource: bindings, ignored listing per whitelist
Dec 18 11:21:23.082: INFO: namespace e2e-tests-downward-api-gmshq deletion completed in 7.012148984s
• [SLOW TEST:18.402 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:21:23.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 18 11:21:23.416: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q2mkh,SelfLink:/api/v1/namespaces/e2e-tests-watch-q2mkh/configmaps/e2e-watch-test-watch-closed,UID:85d33b5b-2188-11ea-a994-fa163e34d433,ResourceVersion:15225160,Generation:0,CreationTimestamp:2019-12-18 11:21:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 11:21:23.416: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q2mkh,SelfLink:/api/v1/namespaces/e2e-tests-watch-q2mkh/configmaps/e2e-watch-test-watch-closed,UID:85d33b5b-2188-11ea-a994-fa163e34d433,ResourceVersion:15225161,Generation:0,CreationTimestamp:2019-12-18 11:21:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 18 11:21:23.452: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q2mkh,SelfLink:/api/v1/namespaces/e2e-tests-watch-q2mkh/configmaps/e2e-watch-test-watch-closed,UID:85d33b5b-2188-11ea-a994-fa163e34d433,ResourceVersion:15225162,Generation:0,CreationTimestamp:2019-12-18 11:21:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 11:21:23.452: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q2mkh,SelfLink:/api/v1/namespaces/e2e-tests-watch-q2mkh/configmaps/e2e-watch-test-watch-closed,UID:85d33b5b-2188-11ea-a994-fa163e34d433,ResourceVersion:15225163,Generation:0,CreationTimestamp:2019-12-18 11:21:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:21:23.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-q2mkh" for this suite.
Dec 18 11:21:29.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:21:29.603: INFO: namespace: e2e-tests-watch-q2mkh, resource: bindings, ignored listing per whitelist
Dec 18 11:21:29.644: INFO: namespace e2e-tests-watch-q2mkh deletion completed in 6.184748408s
• [SLOW TEST:6.562 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:21:29.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 18 11:21:30.109: INFO: Waiting up to 5m0s for pod "pod-89dff257-2188-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-6drjl" to be "success or failure"
Dec 18 11:21:30.147: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.951082ms
Dec 18 11:21:32.176: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066866255s
Dec 18 11:21:34.209: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099901143s
Dec 18 11:21:36.490: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380387702s
Dec 18 11:21:38.530: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421089297s
Dec 18 11:21:40.570: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.461305658s
Dec 18 11:21:42.591: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.482265295s
STEP: Saw pod success
Dec 18 11:21:42.592: INFO: Pod "pod-89dff257-2188-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:21:42.607: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-89dff257-2188-11ea-ad77-0242ac110004 container test-container:
STEP: delete the pod
Dec 18 11:21:43.472: INFO: Waiting for pod pod-89dff257-2188-11ea-ad77-0242ac110004 to disappear
Dec 18 11:21:43.890: INFO: Pod pod-89dff257-2188-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:21:43.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6drjl" for this suite.
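The (non-root,0644,tmpfs) case just completed exercises an `emptyDir` volume backed by memory (tmpfs), written as a non-root user with 0644 file permissions. A pod of the general shape this test creates can be sketched as follows — the e2e framework builds the object in Go, so the image, UID, and command below are illustrative assumptions, not the test's exact values:

```yaml
# Hypothetical sketch of the (non-root,0644,tmpfs) emptyDir test pod.
# medium: Memory backs the volume with tmpfs; the container runs as a
# non-root UID and creates a file with 0644 permissions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-test        # name is an assumption
spec:
  securityContext:
    runAsUser: 1001          # non-root UID; assumption
  containers:
  - name: test-container
    image: busybox           # image is an assumption
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs-backed
  restartPolicy: Never
```

The test then reads the container's logs (as seen above) to confirm the file's mode and ownership came out as expected.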
Dec 18 11:21:50.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:21:50.173: INFO: namespace: e2e-tests-emptydir-6drjl, resource: bindings, ignored listing per whitelist Dec 18 11:21:50.235: INFO: namespace e2e-tests-emptydir-6drjl deletion completed in 6.273973881s • [SLOW TEST:20.591 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:21:50.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-95f8a3a0-2188-11ea-ad77-0242ac110004 STEP: Creating configMap with name cm-test-opt-upd-95f8a419-2188-11ea-ad77-0242ac110004 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-95f8a3a0-2188-11ea-ad77-0242ac110004 STEP: Updating configmap cm-test-opt-upd-95f8a419-2188-11ea-ad77-0242ac110004 STEP: Creating configMap with name cm-test-opt-create-95f8a44b-2188-11ea-ad77-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:23:14.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7cr8w" for this suite. Dec 18 11:23:38.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:23:38.979: INFO: namespace: e2e-tests-configmap-7cr8w, resource: bindings, ignored listing per whitelist Dec 18 11:23:39.117: INFO: namespace e2e-tests-configmap-7cr8w deletion completed in 24.270166925s • [SLOW TEST:108.881 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:23:39.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-b2558 Dec 18 
11:23:51.611: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-b2558 STEP: checking the pod's current state and verifying that restartCount is present Dec 18 11:23:51.623: INFO: Initial restart count of pod liveness-http is 0 Dec 18 11:24:16.670: INFO: Restart count of pod e2e-tests-container-probe-b2558/liveness-http is now 1 (25.046093438s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:24:16.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-b2558" for this suite. Dec 18 11:24:24.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:24:25.138: INFO: namespace: e2e-tests-container-probe-b2558, resource: bindings, ignored listing per whitelist Dec 18 11:24:25.210: INFO: namespace e2e-tests-container-probe-b2558 deletion completed in 8.330996402s • [SLOW TEST:46.092 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 18 11:24:25.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Dec 18 11:24:25.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-6ddkb" to be "success or failure" Dec 18 11:24:25.531: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 73.742107ms Dec 18 11:24:27.549: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091685539s Dec 18 11:24:29.565: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107859609s Dec 18 11:24:31.709: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251751026s Dec 18 11:24:33.912: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455486209s Dec 18 11:24:35.933: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.476115193s Dec 18 11:24:37.952: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.495523749s STEP: Saw pod success Dec 18 11:24:37.952: INFO: Pod "downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004" satisfied condition "success or failure" Dec 18 11:24:37.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004 container client-container: STEP: delete the pod Dec 18 11:24:38.738: INFO: Waiting for pod downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004 to disappear Dec 18 11:24:38.944: INFO: Pod downwardapi-volume-f2675446-2188-11ea-ad77-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 18 11:24:38.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6ddkb" for this suite. Dec 18 11:24:45.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 18 11:24:45.112: INFO: namespace: e2e-tests-downward-api-6ddkb, resource: bindings, ignored listing per whitelist Dec 18 11:24:45.143: INFO: namespace e2e-tests-downward-api-6ddkb deletion completed in 6.184099955s • [SLOW TEST:19.932 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:24:45.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 18 11:24:45.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 18 11:24:47.492: INFO: stderr: ""
Dec 18 11:24:47.493: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:24:47.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5spgv" for this suite.
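The cluster-info spec above only greps kubectl's output for the highlighted master and KubeDNS endpoints. The same check can be reproduced by hand; this is a sketch that assumes a configured kubeconfig and GNU sed for stripping the ANSI color escapes kubectl emits:

```shell
# Reproduce the spec's check manually (requires a reachable cluster).
kubectl cluster-info > /tmp/cluster-info.txt

# kubectl colors its output; strip the ANSI escapes before matching
# (GNU sed syntax is assumed here).
sed 's/\x1b\[[0-9;]*m//g' /tmp/cluster-info.txt |
  grep -E 'Kubernetes master|KubeDNS' || echo "expected services missing"
```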
Dec 18 11:24:53.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:24:53.893: INFO: namespace: e2e-tests-kubectl-5spgv, resource: bindings, ignored listing per whitelist
Dec 18 11:24:54.065: INFO: namespace e2e-tests-kubectl-5spgv deletion completed in 6.447341358s

• [SLOW TEST:8.922 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:24:54.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 18 11:24:54.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-q25wn'
Dec 18 11:24:54.802: INFO: stderr: ""
Dec 18 11:24:54.803: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 18 11:24:56.550: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:24:56.551: INFO: Found 0 / 1
Dec 18 11:24:57.057: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:24:57.057: INFO: Found 0 / 1
Dec 18 11:24:57.832: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:24:57.833: INFO: Found 0 / 1
Dec 18 11:24:58.827: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:24:58.827: INFO: Found 0 / 1
Dec 18 11:24:59.890: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:24:59.891: INFO: Found 0 / 1
Dec 18 11:25:01.030: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:01.030: INFO: Found 0 / 1
Dec 18 11:25:01.936: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:01.936: INFO: Found 0 / 1
Dec 18 11:25:02.817: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:02.817: INFO: Found 0 / 1
Dec 18 11:25:03.914: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:03.914: INFO: Found 0 / 1
Dec 18 11:25:04.827: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:04.827: INFO: Found 0 / 1
Dec 18 11:25:05.836: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:05.836: INFO: Found 1 / 1
Dec 18 11:25:05.836: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 18 11:25:05.857: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:25:05.857: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Dec 18 11:25:05.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pwzq4 redis-master --namespace=e2e-tests-kubectl-q25wn'
Dec 18 11:25:06.125: INFO: stderr: ""
Dec 18 11:25:06.125: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Dec 11:25:04.098 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Dec 11:25:04.098 # Server started, Redis version 3.2.12\n1:M 18 Dec 11:25:04.099 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Dec 11:25:04.099 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 18 11:25:06.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pwzq4 redis-master --namespace=e2e-tests-kubectl-q25wn --tail=1'
Dec 18 11:25:06.266: INFO: stderr: ""
Dec 18 11:25:06.267: INFO: stdout: "1:M 18 Dec 11:25:04.099 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 18 11:25:06.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pwzq4 redis-master --namespace=e2e-tests-kubectl-q25wn --limit-bytes=1'
Dec 18 11:25:06.531: INFO: stderr: ""
Dec 18 11:25:06.531: INFO: stdout: " "
STEP: exposing timestamps
Dec 18 11:25:06.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pwzq4 redis-master --namespace=e2e-tests-kubectl-q25wn --tail=1 --timestamps'
Dec 18 11:25:06.718: INFO: stderr: ""
Dec 18 11:25:06.718: INFO: stdout: "2019-12-18T11:25:04.099463417Z 1:M 18 Dec 11:25:04.099 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 18 11:25:09.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pwzq4 redis-master --namespace=e2e-tests-kubectl-q25wn --since=1s'
Dec 18 11:25:09.596: INFO: stderr: ""
Dec 18 11:25:09.597: INFO: stdout: ""
Dec 18 11:25:09.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-pwzq4 redis-master --namespace=e2e-tests-kubectl-q25wn --since=24h'
Dec 18 11:25:09.867: INFO: stderr: ""
Dec 18 11:25:09.867: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 18 Dec 11:25:04.098 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Dec 11:25:04.098 # Server started, Redis version 3.2.12\n1:M 18 Dec 11:25:04.099 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Dec 11:25:04.099 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 18 11:25:09.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-q25wn'
Dec 18 11:25:10.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 11:25:10.157: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 18 11:25:10.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-q25wn'
Dec 18 11:25:10.511: INFO: stderr: "No resources found.\n"
Dec 18 11:25:10.511: INFO: stdout: ""
Dec 18 11:25:10.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-q25wn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 11:25:10.940: INFO: stderr: ""
Dec 18 11:25:10.940: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:25:10.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q25wn" for this suite.
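The filtering steps above exercise four of kubectl's log-selection flags (via the deprecated `kubectl log` alias; `kubectl logs` is the current spelling). A condensed sketch of the same calls, with the pod and container names taken from this run, which of course requires the pod to still exist:

```shell
# Last line only (what "limiting log lines" checked):
kubectl logs redis-master-pwzq4 -c redis-master --tail=1
# First byte only (what "limiting log bytes" checked):
kubectl logs redis-master-pwzq4 -c redis-master --limit-bytes=1
# Prefix each line with an RFC3339 timestamp:
kubectl logs redis-master-pwzq4 -c redis-master --tail=1 --timestamps
# Only entries newer than the given duration:
kubectl logs redis-master-pwzq4 -c redis-master --since=1s   # usually empty, as above
kubectl logs redis-master-pwzq4 -c redis-master --since=24h  # full startup banner
```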
Dec 18 11:25:35.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:25:35.151: INFO: namespace: e2e-tests-kubectl-q25wn, resource: bindings, ignored listing per whitelist
Dec 18 11:25:35.162: INFO: namespace e2e-tests-kubectl-q25wn deletion completed in 24.206812414s

• [SLOW TEST:41.095 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:25:35.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 18 11:25:35.566: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vcc6s,SelfLink:/api/v1/namespaces/e2e-tests-watch-vcc6s/configmaps/e2e-watch-test-resource-version,UID:1c2b5ef5-2189-11ea-a994-fa163e34d433,ResourceVersion:15225636,Generation:0,CreationTimestamp:2019-12-18 11:25:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 11:25:35.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vcc6s,SelfLink:/api/v1/namespaces/e2e-tests-watch-vcc6s/configmaps/e2e-watch-test-resource-version,UID:1c2b5ef5-2189-11ea-a994-fa163e34d433,ResourceVersion:15225637,Generation:0,CreationTimestamp:2019-12-18 11:25:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:25:35.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vcc6s" for this suite.
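The watcher spec above mutates a configmap twice, deletes it, then opens a watch at the resourceVersion returned by the first update and expects only the later MODIFIED and DELETED events to be replayed. A rough hand-driven equivalent against the raw API (object and namespace names are illustrative; `watch=true` and `resourceVersion` are standard list/watch query parameters):

```shell
# Record a starting point: the resourceVersion of a configmap you control
# (it must still exist at this moment, unlike the already-deleted one above).
RV=$(kubectl get configmap my-watched-config -n my-namespace \
      -o jsonpath='{.metadata.resourceVersion}')

# Stream every event newer than that version; the apiserver replays
# MODIFIED/DELETED notifications as a sequence of JSON objects until
# the connection times out.
kubectl get --raw \
  "/api/v1/namespaces/my-namespace/configmaps?watch=true&resourceVersion=${RV}"
```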
Dec 18 11:25:41.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:25:41.949: INFO: namespace: e2e-tests-watch-vcc6s, resource: bindings, ignored listing per whitelist
Dec 18 11:25:41.959: INFO: namespace e2e-tests-watch-vcc6s deletion completed in 6.386164702s

• [SLOW TEST:6.797 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:25:41.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dpsm6.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dpsm6.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dpsm6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dpsm6.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dpsm6.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dpsm6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 11:26:00.274: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.280: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.291: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.300: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.308: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.315: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.323: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dpsm6.svc.cluster.local from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.333: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.342: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.349: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-201cfbbd-2189-11ea-ad77-0242ac110004)
Dec 18 11:26:00.349: INFO: Lookups using e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dpsm6.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 18 11:26:05.512: INFO: DNS probes using e2e-tests-dns-dpsm6/dns-test-201cfbbd-2189-11ea-ad77-0242ac110004 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:26:05.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-dpsm6" for this suite.
Dec 18 11:26:14.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:26:14.142: INFO: namespace: e2e-tests-dns-dpsm6, resource: bindings, ignored listing per whitelist
Dec 18 11:26:14.212: INFO: namespace e2e-tests-dns-dpsm6 deletion completed in 8.359752023s

• [SLOW TEST:32.253 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:26:14.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:26:14.550: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 25.767756ms)
Dec 18 11:26:14.564: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.413456ms)
Dec 18 11:26:14.586: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.299268ms)
Dec 18 11:26:14.601: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.855376ms)
Dec 18 11:26:14.611: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.790735ms)
Dec 18 11:26:14.622: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.979977ms)
Dec 18 11:26:14.629: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.307642ms)
Dec 18 11:26:14.635: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.196237ms)
Dec 18 11:26:14.642: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.865483ms)
Dec 18 11:26:14.647: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.800942ms)
Dec 18 11:26:14.653: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.001608ms)
Dec 18 11:26:14.658: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.518755ms)
Dec 18 11:26:14.663: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.379723ms)
Dec 18 11:26:14.674: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.107603ms)
Dec 18 11:26:14.679: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.261339ms)
Dec 18 11:26:14.683: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.314789ms)
Dec 18 11:26:14.688: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.349607ms)
Dec 18 11:26:14.693: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.435109ms)
Dec 18 11:26:14.697: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.355386ms)
Dec 18 11:26:14.702: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.464304ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:26:14.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-xg4p9" for this suite.
Dec 18 11:26:20.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:26:20.874: INFO: namespace: e2e-tests-proxy-xg4p9, resource: bindings, ignored listing per whitelist
Dec 18 11:26:21.069: INFO: namespace e2e-tests-proxy-xg4p9 deletion completed in 6.362448013s

• [SLOW TEST:6.856 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
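The twenty timed requests in the proxy spec above all hit the node's `logs` proxy subresource. Two equivalent ways to issue the same request by hand (the node name is from this run; both require cluster access):

```shell
# Through the apiserver directly, using kubectl's credentials:
kubectl get --raw /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/

# Or via a local proxy, which handles auth for plain HTTP clients:
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/
kill %1
```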
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:26:21.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:26:21.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 18 11:26:21.463: INFO: stderr: ""
Dec 18 11:26:21.463: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:26:21.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vgfwt" for this suite.
Dec 18 11:26:27.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:26:27.677: INFO: namespace: e2e-tests-kubectl-vgfwt, resource: bindings, ignored listing per whitelist
Dec 18 11:26:27.715: INFO: namespace e2e-tests-kubectl-vgfwt deletion completed in 6.236026949s

• [SLOW TEST:6.645 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
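The version spec above only asserts that both the client and server stanzas appear in `kubectl version` output. For scripting, the structured output form is easier to check; a sketch, assuming the `-o json` output format supported by this client:

```shell
# Human-readable, as the spec runs it:
kubectl version

# Machine-readable; one gitVersion field each for client and server:
kubectl version -o json | grep -c '"gitVersion"'   # expect 2
```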
S
------------------------------
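Several specs in this run ([sig-storage] Downward API volume) follow the same shape: create a pod whose container reads a file projected from pod metadata and exits, then wait for "success or failure". A minimal hand-written sketch of such a pod; the image, names, and paths here are illustrative, not the suite's actual generated fixture:

```shell
# Hypothetical stand-in for the suite's generated test pod.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
# The pod should reach Succeeded once the cat exits 0:
kubectl get pod downwardapi-volume-example -o jsonpath='{.status.phase}'
```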
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:26:27.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:26:28.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-74bcd" to be "success or failure"
Dec 18 11:26:28.118: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 29.699357ms
Dec 18 11:26:30.440: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351735676s
Dec 18 11:26:32.474: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.385904596s
Dec 18 11:26:34.851: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762631605s
Dec 18 11:26:36.885: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796079099s
Dec 18 11:26:38.900: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.811866311s
STEP: Saw pod success
Dec 18 11:26:38.901: INFO: Pod "downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:26:38.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:26:39.509: INFO: Waiting for pod downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:26:39.663: INFO: Pod downwardapi-volume-3b7d342e-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:26:39.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-74bcd" for this suite.
Dec 18 11:26:46.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:26:46.118: INFO: namespace: e2e-tests-downward-api-74bcd, resource: bindings, ignored listing per whitelist
Dec 18 11:26:46.210: INFO: namespace e2e-tests-downward-api-74bcd deletion completed in 6.52017428s

• [SLOW TEST:18.495 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
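For context, the test above creates a pod whose downward API volume exposes the pod's own name as a file. A minimal sketch of such a pod, assuming illustrative names and image (none of these are taken from the log):

```yaml
# Hypothetical sketch of a downward API volume pod (names and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```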
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:26:46.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 18 11:26:46.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:47.115: INFO: stderr: ""
Dec 18 11:26:47.115: INFO: stdout: "pod/pause created\n"
Dec 18 11:26:47.115: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 18 11:26:47.116: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-4fxq8" to be "running and ready"
Dec 18 11:26:47.158: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 42.646889ms
Dec 18 11:26:49.173: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057326288s
Dec 18 11:26:51.269: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15345966s
Dec 18 11:26:54.036: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920318556s
Dec 18 11:26:56.068: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.952480996s
Dec 18 11:26:58.088: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.972433815s
Dec 18 11:26:58.088: INFO: Pod "pause" satisfied condition "running and ready"
Dec 18 11:26:58.088: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 18 11:26:58.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:58.308: INFO: stderr: ""
Dec 18 11:26:58.308: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 18 11:26:58.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:58.501: INFO: stderr: ""
Dec 18 11:26:58.501: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 18 11:26:58.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:58.677: INFO: stderr: ""
Dec 18 11:26:58.677: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 18 11:26:58.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:58.807: INFO: stderr: ""
Dec 18 11:26:58.807: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 18 11:26:58.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:59.009: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 11:26:59.009: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 18 11:26:59.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-4fxq8'
Dec 18 11:26:59.184: INFO: stderr: "No resources found.\n"
Dec 18 11:26:59.184: INFO: stdout: ""
Dec 18 11:26:59.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-4fxq8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 11:26:59.303: INFO: stderr: ""
Dec 18 11:26:59.304: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:26:59.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4fxq8" for this suite.
Dec 18 11:27:07.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:27:07.613: INFO: namespace: e2e-tests-kubectl-4fxq8, resource: bindings, ignored listing per whitelist
Dec 18 11:27:07.656: INFO: namespace e2e-tests-kubectl-4fxq8 deletion completed in 8.341263834s

• [SLOW TEST:21.445 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
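The label test above drives everything through the kubectl invocations visible in the log: `kubectl label pods pause testing-label=testing-label-value` to add the label, `kubectl get pod pause -L testing-label` to display it as a column, and the trailing-dash form `kubectl label pods pause testing-label-` to remove it. The "pause" pod it creates via `kubectl create -f -` would look roughly like this (a hedged sketch; the image tag is an assumption, not shown in the log):

```yaml
# Hypothetical stand-in for the "pause" pod created via `kubectl create -f -`.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # illustrative image tag
```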
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:27:07.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5342b913-2189-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:27:07.952: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-s8gxz" to be "success or failure"
Dec 18 11:27:08.033: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 81.095413ms
Dec 18 11:27:10.500: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.548313529s
Dec 18 11:27:12.520: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568275076s
Dec 18 11:27:14.540: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588079875s
Dec 18 11:27:16.564: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612409929s
Dec 18 11:27:18.587: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.635349185s
Dec 18 11:27:20.769: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.816711395s
STEP: Saw pod success
Dec 18 11:27:20.769: INFO: Pod "pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:27:20.800: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 11:27:21.911: INFO: Waiting for pod pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:27:21.931: INFO: Pod pod-projected-secrets-53440b9c-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:27:21.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s8gxz" for this suite.
Dec 18 11:27:28.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:27:28.227: INFO: namespace: e2e-tests-projected-s8gxz, resource: bindings, ignored listing per whitelist
Dec 18 11:27:28.296: INFO: namespace e2e-tests-projected-s8gxz deletion completed in 6.243058917s

• [SLOW TEST:20.639 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
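The projected secret test above mounts a secret through a `projected` volume rather than a plain `secret` volume. A minimal sketch, assuming illustrative names (the log only shows the generated secret and pod names):

```yaml
# Hypothetical sketch of a projected secret volume pod (names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example
```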
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:27:28.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1218 11:27:31.674623       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 11:27:31.674: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:27:31.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bbbv5" for this suite.
Dec 18 11:27:39.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:27:39.069: INFO: namespace: e2e-tests-gc-bbbv5, resource: bindings, ignored listing per whitelist
Dec 18 11:27:39.202: INFO: namespace e2e-tests-gc-bbbv5 deletion completed in 7.522301662s

• [SLOW TEST:10.905 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
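The garbage collector test above relies on owner references: the ReplicaSet a Deployment creates carries an `ownerReferences` entry pointing back at the Deployment, so deleting the Deployment without orphaning lets the garbage collector cascade the deletion to the ReplicaSet and its pods. A sketch of that metadata, with a placeholder UID (the real value is cluster-generated):

```yaml
# Hypothetical ownerReference as it appears on a ReplicaSet created by a Deployment.
metadata:
  name: example-deployment-5d69d9ccd8
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
    uid: 00000000-0000-0000-0000-000000000000   # illustrative placeholder
    controller: true
    blockOwnerDeletion: true
```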
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:27:39.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:27:39.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-n5sl2" to be "success or failure"
Dec 18 11:27:39.591: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 128.086899ms
Dec 18 11:27:41.603: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139792485s
Dec 18 11:27:43.667: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203672619s
Dec 18 11:27:46.021: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.557938585s
Dec 18 11:27:48.917: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.454339741s
Dec 18 11:27:51.050: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.587117709s
STEP: Saw pod success
Dec 18 11:27:51.050: INFO: Pod "downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:27:51.060: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:27:51.141: INFO: Waiting for pod downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:27:51.217: INFO: Pod downwardapi-volume-660af800-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:27:51.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n5sl2" for this suite.
Dec 18 11:27:59.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:27:59.415: INFO: namespace: e2e-tests-projected-n5sl2, resource: bindings, ignored listing per whitelist
Dec 18 11:27:59.463: INFO: namespace e2e-tests-projected-n5sl2 deletion completed in 8.239202295s

• [SLOW TEST:20.261 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
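In the test above, the container sets no memory limit, so the downward API's `limits.memory` resolves to the node's allocatable memory. A hedged sketch of such a pod using `resourceFieldRef` (names and image are illustrative):

```yaml
# Hypothetical sketch: no memory limit is set, so limits.memory falls back to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```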
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:27:59.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 18 11:27:59.866: INFO: Waiting up to 5m0s for pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-mc8sm" to be "success or failure"
Dec 18 11:27:59.906: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.234431ms
Dec 18 11:28:02.153: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286362628s
Dec 18 11:28:04.185: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318505891s
Dec 18 11:28:06.405: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.538029347s
Dec 18 11:28:08.876: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009840981s
Dec 18 11:28:10.913: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.046486945s
Dec 18 11:28:12.968: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.101295761s
STEP: Saw pod success
Dec 18 11:28:12.968: INFO: Pod "downward-api-72326d73-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:28:12.980: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-72326d73-2189-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 11:28:13.340: INFO: Waiting for pod downward-api-72326d73-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:28:13.366: INFO: Pod downward-api-72326d73-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:28:13.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mc8sm" for this suite.
Dec 18 11:28:19.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:28:19.549: INFO: namespace: e2e-tests-downward-api-mc8sm, resource: bindings, ignored listing per whitelist
Dec 18 11:28:20.030: INFO: namespace e2e-tests-downward-api-mc8sm deletion completed in 6.655010573s

• [SLOW TEST:20.567 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
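The host-IP test above injects `status.hostIP` into the container environment via a `fieldRef`. A minimal sketch, with illustrative names and image:

```yaml
# Hypothetical sketch of the downward API env-var pod (names and image are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```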
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:28:20.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 18 11:28:21.630: INFO: Waiting up to 5m0s for pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-8cr6h" to be "success or failure"
Dec 18 11:28:21.668: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 38.137574ms
Dec 18 11:28:24.115: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485604736s
Dec 18 11:28:26.137: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5074138s
Dec 18 11:28:28.289: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659045235s
Dec 18 11:28:30.316: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686602147s
Dec 18 11:28:32.345: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.714737383s
Dec 18 11:28:34.476: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.84568457s
STEP: Saw pod success
Dec 18 11:28:34.476: INFO: Pod "pod-7f2b9c84-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:28:34.488: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7f2b9c84-2189-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 11:28:34.930: INFO: Waiting for pod pod-7f2b9c84-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:28:34.994: INFO: Pod pod-7f2b9c84-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:28:34.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8cr6h" for this suite.
Dec 18 11:28:41.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:28:41.139: INFO: namespace: e2e-tests-emptydir-8cr6h, resource: bindings, ignored listing per whitelist
Dec 18 11:28:41.296: INFO: namespace e2e-tests-emptydir-8cr6h deletion completed in 6.283029286s

• [SLOW TEST:21.265 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
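The emptyDir test above writes a 0644 file on the default (disk-backed) medium while running as a non-root user. A hedged sketch of such a pod; the UID and commands are assumptions for illustration:

```yaml
# Hypothetical sketch: emptyDir on the default medium, written as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # non-root, illustrative UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```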
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:28:41.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-8b045e88-2189-11ea-ad77-0242ac110004
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:28:55.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-74zcm" for this suite.
Dec 18 11:29:19.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:29:19.799: INFO: namespace: e2e-tests-configmap-74zcm, resource: bindings, ignored listing per whitelist
Dec 18 11:29:19.884: INFO: namespace e2e-tests-configmap-74zcm deletion completed in 24.277144744s

• [SLOW TEST:38.588 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
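The ConfigMap test above checks that both `data` (UTF-8 text) and `binaryData` (base64-encoded bytes) appear when the ConfigMap is mounted as a volume. A minimal sketch of such a ConfigMap, with illustrative keys and values:

```yaml
# Hypothetical sketch of a ConfigMap carrying both text data and base64-encoded binaryData.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example
data:
  text-data: "hello"
binaryData:
  binary-file: aGVsbG8=   # base64 for "hello"
```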
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:29:19.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6nh8r;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6nh8r;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6nh8r.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 238.116.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.116.238_udp@PTR;check="$$(dig +tcp +noall +answer +search 238.116.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.116.238_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6nh8r;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6nh8r;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-6nh8r.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6nh8r.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 238.116.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.116.238_udp@PTR;check="$$(dig +tcp +noall +answer +search 238.116.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.116.238_tcp@PTR;sleep 1; done
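The wheezy and jessie probe commands above all repeat one pattern: run a `dig` query, and if the answer is non-empty, write `OK` into a per-name results file that the test later reads back through the pod. A minimal sketch of that pattern; the `probe` helper name, the results directory, and the stubbed lookup are illustrative, not part of the actual test image:

```shell
#!/bin/sh
# Sketch of the probe pattern the DNS test pods run: execute a lookup command,
# and if it produced a non-empty answer, record OK in a results file.
RESULTS_DIR="${RESULTS_DIR:-/tmp/results}"
mkdir -p "$RESULTS_DIR"

probe() {
    # $1 = result file name, remaining args = lookup command
    name="$1"; shift
    check="$("$@")" && test -n "$check" && echo OK > "$RESULTS_DIR/$name"
}

# In the real pod this would be, e.g.:
#   probe wheezy_udp@dns-test-service dig +notcp +noall +answer +search dns-test-service A
# Here a stub lookup stands in for dig so the sketch runs anywhere.
probe demo_lookup echo "10.104.116.238"
cat "$RESULTS_DIR/demo_lookup"   # prints OK
```

The `$$` escaping in the logged commands comes from the command template the test expands; in a plain shell script a single `$` is used, as above.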

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 11:29:40.632: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.651: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.696: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6nh8r from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.714: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-6nh8r from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.724: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.732: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.737: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.742: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.749: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.754: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.758: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.762: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.765: INFO: Unable to read 10.104.116.238_udp@PTR from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.770: INFO: Unable to read 10.104.116.238_tcp@PTR from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.774: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.786: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.797: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6nh8r from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.807: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6nh8r from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.816: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.830: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.837: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.856: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.879: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.893: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.904: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.909: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.921: INFO: Unable to read 10.104.116.238_udp@PTR from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.925: INFO: Unable to read 10.104.116.238_tcp@PTR from pod e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004: the server could not find the requested resource (get pods dns-test-a226e45c-2189-11ea-ad77-0242ac110004)
Dec 18 11:29:40.925: INFO: Lookups using e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-6nh8r wheezy_tcp@dns-test-service.e2e-tests-dns-6nh8r wheezy_udp@dns-test-service.e2e-tests-dns-6nh8r.svc wheezy_tcp@dns-test-service.e2e-tests-dns-6nh8r.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.116.238_udp@PTR 10.104.116.238_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-6nh8r jessie_tcp@dns-test-service.e2e-tests-dns-6nh8r jessie_udp@dns-test-service.e2e-tests-dns-6nh8r.svc jessie_tcp@dns-test-service.e2e-tests-dns-6nh8r.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-6nh8r.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-6nh8r.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.116.238_udp@PTR 10.104.116.238_tcp@PTR]

Dec 18 11:29:46.362: INFO: DNS probes using e2e-tests-dns-6nh8r/dns-test-a226e45c-2189-11ea-ad77-0242ac110004 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:29:47.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-6nh8r" for this suite.
Dec 18 11:29:55.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:29:55.469: INFO: namespace: e2e-tests-dns-6nh8r, resource: bindings, ignored listing per whitelist
Dec 18 11:29:55.482: INFO: namespace e2e-tests-dns-6nh8r deletion completed in 8.419601633s

• [SLOW TEST:35.597 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:29:55.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 11:29:55.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-6p257'
Dec 18 11:29:55.842: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 11:29:55.842: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 18 11:29:57.922: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-7t54q]
Dec 18 11:29:57.922: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-7t54q" in namespace "e2e-tests-kubectl-6p257" to be "running and ready"
Dec 18 11:29:57.927: INFO: Pod "e2e-test-nginx-rc-7t54q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.813762ms
Dec 18 11:29:59.946: INFO: Pod "e2e-test-nginx-rc-7t54q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023088812s
Dec 18 11:30:02.011: INFO: Pod "e2e-test-nginx-rc-7t54q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088561835s
Dec 18 11:30:04.035: INFO: Pod "e2e-test-nginx-rc-7t54q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112935488s
Dec 18 11:30:06.066: INFO: Pod "e2e-test-nginx-rc-7t54q": Phase="Running", Reason="", readiness=true. Elapsed: 8.143698082s
Dec 18 11:30:06.066: INFO: Pod "e2e-test-nginx-rc-7t54q" satisfied condition "running and ready"
Dec 18 11:30:06.066: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-7t54q]
Dec 18 11:30:06.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6p257'
Dec 18 11:30:06.330: INFO: stderr: ""
Dec 18 11:30:06.330: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 18 11:30:06.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6p257'
Dec 18 11:30:06.493: INFO: stderr: ""
Dec 18 11:30:06.494: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:30:06.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6p257" for this suite.
Dec 18 11:30:28.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:30:28.724: INFO: namespace: e2e-tests-kubectl-6p257, resource: bindings, ignored listing per whitelist
Dec 18 11:30:28.822: INFO: namespace e2e-tests-kubectl-6p257 deletion completed in 22.3195052s

• [SLOW TEST:33.340 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
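The "Waiting up to 5m0s for pod ... to be 'running and ready'" lines above come from the framework polling a condition until it succeeds or a timeout elapses. A minimal sketch of that polling loop, assuming a stub condition that succeeds on the third attempt (the real framework sleeps a couple of seconds between polls, omitted here so the sketch runs instantly):

```shell
#!/bin/sh
# Sketch of "wait up to N polls for a condition" as done by the e2e framework.
# The condition below is a stub: it reads an attempt counter from a temp file
# and reports success from the third poll onwards.
ATTEMPTS_FILE="$(mktemp)"
echo 0 > "$ATTEMPTS_FILE"

condition() {
    n=$(cat "$ATTEMPTS_FILE")
    n=$((n + 1))
    echo "$n" > "$ATTEMPTS_FILE"
    [ "$n" -ge 3 ]    # "Running and ready" from the third poll onwards
}

wait_for() {
    # $1 = maximum number of polls, remaining args = condition command
    timeout="$1"; shift
    i=0
    while [ "$i" -lt "$timeout" ]; do
        if "$@"; then
            echo "condition met after $((i + 1)) polls"
            return 0
        fi
        i=$((i + 1))
    done
    echo "timed out"
    return 1
}

wait_for 10 condition   # prints: condition met after 3 polls
```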
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:30:28.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-cb13802b-2189-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:30:29.045: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-cwt66" to be "success or failure"
Dec 18 11:30:29.054: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142421ms
Dec 18 11:30:31.074: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028309258s
Dec 18 11:30:33.083: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037045045s
Dec 18 11:30:35.104: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058361102s
Dec 18 11:30:37.124: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078332133s
Dec 18 11:30:39.146: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.100885368s
Dec 18 11:30:41.159: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.113744537s
STEP: Saw pod success
Dec 18 11:30:41.159: INFO: Pod "pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:30:41.166: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 11:30:42.250: INFO: Waiting for pod pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:30:42.282: INFO: Pod pod-projected-configmaps-cb1de3de-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:30:42.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cwt66" for this suite.
Dec 18 11:30:48.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:30:48.832: INFO: namespace: e2e-tests-projected-cwt66, resource: bindings, ignored listing per whitelist
Dec 18 11:30:48.938: INFO: namespace e2e-tests-projected-cwt66 deletion completed in 6.649013662s

• [SLOW TEST:20.116 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
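The pod in this test consumes the ConfigMap through a projected volume "with mappings", i.e. a key remapped to a nested path inside the mount. The log does not show the actual pod spec, so the following manifest is a hedged reconstruction: the pod name, image, and file paths are illustrative; only the container name and the mapping pattern follow the test.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # illustrative image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # ConfigMap created by the test
          items:
          - key: data-2                    # the "mapping": key remapped to a path
            path: path/to/data-2
```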
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:30:48.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:30:49.289: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 18 11:30:54.309: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 18 11:31:00.333: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 18 11:31:00.411: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-v4sj6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v4sj6/deployments/test-cleanup-deployment,UID:ddcb3d65-2189-11ea-a994-fa163e34d433,ResourceVersion:15226411,Generation:1,CreationTimestamp:2019-12-18 11:31:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 18 11:31:00.418: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:31:00.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-v4sj6" for this suite.
Dec 18 11:31:08.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:31:09.053: INFO: namespace: e2e-tests-deployment-v4sj6, resource: bindings, ignored listing per whitelist
Dec 18 11:31:09.143: INFO: namespace e2e-tests-deployment-v4sj6 deletion completed in 8.567285778s

• [SLOW TEST:20.203 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
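The Deployment dump above shows `RevisionHistoryLimit:*0`, which is what makes the controller delete old ReplicaSets as soon as they are scaled down, i.e. the behavior this test asserts. A sketch of the equivalent manifest, reconstructed from the fields visible in the dump (name, labels, image, and replica count match the dump; everything omitted falls back to the defaults shown there):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are deleted as soon as they are scaled down
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```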
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:31:09.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-e3a05a7b-2189-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:31:10.679: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-p6cjn" to be "success or failure"
Dec 18 11:31:10.708: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.8699ms
Dec 18 11:31:13.129: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449550948s
Dec 18 11:31:15.160: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480674207s
Dec 18 11:31:17.528: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.848978718s
Dec 18 11:31:19.556: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.876385607s
Dec 18 11:31:21.574: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.894082721s
STEP: Saw pod success
Dec 18 11:31:21.574: INFO: Pod "pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:31:21.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 11:31:22.663: INFO: Waiting for pod pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:31:22.678: INFO: Pod pod-projected-secrets-e3cfe6ee-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:31:22.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p6cjn" for this suite.
Dec 18 11:31:28.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:31:28.937: INFO: namespace: e2e-tests-projected-p6cjn, resource: bindings, ignored listing per whitelist
Dec 18 11:31:29.118: INFO: namespace e2e-tests-projected-p6cjn deletion completed in 6.429187811s

• [SLOW TEST:19.974 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:31:29.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:31:29.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-hpmwg" to be "success or failure"
Dec 18 11:31:29.402: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.743197ms
Dec 18 11:31:31.421: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048948184s
Dec 18 11:31:33.435: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063635756s
Dec 18 11:31:35.941: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569298715s
Dec 18 11:31:37.954: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582706986s
Dec 18 11:31:39.972: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.599808142s
STEP: Saw pod success
Dec 18 11:31:39.972: INFO: Pod "downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:31:39.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:31:40.199: INFO: Waiting for pod downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:31:40.205: INFO: Pod downwardapi-volume-ef143759-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:31:40.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hpmwg" for this suite.
Dec 18 11:31:46.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:31:46.598: INFO: namespace: e2e-tests-projected-hpmwg, resource: bindings, ignored listing per whitelist
Dec 18 11:31:46.657: INFO: namespace e2e-tests-projected-hpmwg deletion completed in 6.444479399s

• [SLOW TEST:17.540 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
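The "should provide podname only" test exposes the pod's own name to the container through a projected downwardAPI volume source (`fieldRef: metadata.name`). The log does not include the pod spec, so this manifest is a hedged sketch: the pod name, image, and mount path are illustrative; the container name `client-container` matches the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                          # illustrative image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name     # resolves to the pod's own name
```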
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:31:46.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f98d8b13-2189-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:31:47.061: INFO: Waiting up to 5m0s for pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-fp62t" to be "success or failure"
Dec 18 11:31:47.082: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.823423ms
Dec 18 11:31:49.102: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040358473s
Dec 18 11:31:51.136: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074326728s
Dec 18 11:31:53.382: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321015347s
Dec 18 11:31:55.394: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332397937s
Dec 18 11:31:57.407: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.345546633s
STEP: Saw pod success
Dec 18 11:31:57.407: INFO: Pod "pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:31:57.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 18 11:31:57.495: INFO: Waiting for pod pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004 to disappear
Dec 18 11:31:57.504: INFO: Pod pod-configmaps-f99c5d40-2189-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:31:57.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fp62t" for this suite.
Dec 18 11:32:03.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:32:03.700: INFO: namespace: e2e-tests-configmap-fp62t, resource: bindings, ignored listing per whitelist
Dec 18 11:32:04.001: INFO: namespace e2e-tests-configmap-fp62t deletion completed in 6.490742624s

• [SLOW TEST:17.344 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
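"Consumable ... with mappings" means the ConfigMap volume uses `items` to select a key and rename the file it lands in, rather than mounting every key under its own name. A hedged sketch, with illustrative key and path names (assumptions, not from the log):

```python
def configmap_mapped_volume_pod(cm_name: str) -> dict:
    """Pod that mounts one ConfigMap key under a remapped file path."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps-example"},
        "spec": {
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                # Read the remapped path, not the original key name.
                "command": ["cat", "/etc/configmap-volume/path/to/data-2"],
                "volumeMounts": [{
                    "name": "configmap-volume",
                    "mountPath": "/etc/configmap-volume",
                }],
            }],
            "restartPolicy": "Never",
            "volumes": [{
                "name": "configmap-volume",
                "configMap": {
                    "name": cm_name,
                    # The "mapping": key data-1 is surfaced as path/to/data-2.
                    "items": [{"key": "data-1", "path": "path/to/data-2"}],
                },
            }],
        },
    }
```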
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:32:04.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 18 11:32:04.285: INFO: Waiting up to 5m0s for pod "pod-03e44806-218a-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-fxzjj" to be "success or failure"
Dec 18 11:32:04.291: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116843ms
Dec 18 11:32:06.307: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02159619s
Dec 18 11:32:08.338: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053280196s
Dec 18 11:32:10.589: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303747794s
Dec 18 11:32:12.644: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.3587128s
Dec 18 11:32:14.672: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.387324373s
STEP: Saw pod success
Dec 18 11:32:14.673: INFO: Pod "pod-03e44806-218a-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:32:14.678: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-03e44806-218a-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 11:32:14.979: INFO: Waiting for pod pod-03e44806-218a-11ea-ad77-0242ac110004 to disappear
Dec 18 11:32:15.170: INFO: Pod pod-03e44806-218a-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:32:15.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fxzjj" for this suite.
Dec 18 11:32:21.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:32:21.417: INFO: namespace: e2e-tests-emptydir-fxzjj, resource: bindings, ignored listing per whitelist
Dec 18 11:32:21.431: INFO: namespace e2e-tests-emptydir-fxzjj deletion completed in 6.253724822s

• [SLOW TEST:17.429 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
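The `(non-root,0777,tmpfs)` triple in the test name encodes the scenario: run as a non-root user, create a file with mode 0777, on a memory-backed (`medium: Memory`, i.e. tmpfs) emptyDir. A sketch under those assumptions (image, UID, and paths are illustrative):

```python
def emptydir_tmpfs_pod() -> dict:
    """Memory-backed emptyDir written by a non-root user with mode 0777."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-example"},
        "spec": {
            "securityContext": {"runAsUser": 1001},  # non-root
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                # Create a file, force mode 0777, then report the mode.
                "command": ["sh", "-c",
                            "touch /test-volume/f && chmod 0777 /test-volume/f"
                            " && stat -c %a /test-volume/f"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            "restartPolicy": "Never",
            "volumes": [{
                "name": "test-volume",
                "emptyDir": {"medium": "Memory"},  # tmpfs-backed emptyDir
            }],
        },
    }
```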
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:32:21.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 18 11:32:31.868: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-0e442f30-218a-11ea-ad77-0242ac110004", GenerateName:"", Namespace:"e2e-tests-pods-bghs5", SelfLink:"/api/v1/namespaces/e2e-tests-pods-bghs5/pods/pod-submit-remove-0e442f30-218a-11ea-ad77-0242ac110004", UID:"0e4672b8-218a-11ea-a994-fa163e34d433", ResourceVersion:"15226651", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712265541, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"677470589"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cntcv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00241c500), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cntcv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015478d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d83e00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001547910)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001547930)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001547938), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00154793c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712265541, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712265551, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712265551, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712265541, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0017a3c20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0017a3c40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://5f1a60a34dfa447c5c72cbed710fbd9442096ebd628afc6c1dbb3fa63688555f"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:32:42.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bghs5" for this suite.
Dec 18 11:32:48.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:32:48.809: INFO: namespace: e2e-tests-pods-bghs5, resource: bindings, ignored listing per whitelist
Dec 18 11:32:48.835: INFO: namespace e2e-tests-pods-bghs5 deletion completed in 6.163466494s

• [SLOW TEST:27.404 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:32:48.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:32:49.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qpsjz" for this suite.
Dec 18 11:33:13.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:33:13.717: INFO: namespace: e2e-tests-pods-qpsjz, resource: bindings, ignored listing per whitelist
Dec 18 11:33:13.785: INFO: namespace e2e-tests-pods-qpsjz deletion completed in 24.516231451s

• [SLOW TEST:24.950 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
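The "Set QOS Class" step above only checks that the apiserver assigns `.status.qosClass` on submission. The classification rule itself is documented and can be sketched as a small function (the pod dumped earlier in this log, with no requests or limits, lands in `BestEffort`, matching its `QOSClass:"BestEffort"` field):

```python
def qos_class(containers):
    """Classify a pod the way Kubernetes assigns .status.qosClass.

    `containers` is a list of dicts like
    {"requests": {"cpu": "100m"}, "limits": {"cpu": "100m", "memory": "64Mi"}}.
    """
    resources = ("cpu", "memory")
    all_reqs = [c.get("requests", {}) for c in containers]
    all_lims = [c.get("limits", {}) for c in containers]

    # BestEffort: no container sets any request or limit.
    if not any(all_reqs) and not any(all_lims):
        return "BestEffort"

    # Guaranteed: every container sets cpu and memory limits, and every
    # request (defaulting to the limit when unset) equals the matching limit.
    guaranteed = all(
        all(r in lim for r in resources) and
        all(req.get(r, lim[r]) == lim[r] for r in resources)
        for req, lim in zip(all_reqs, all_lims)
    )
    return "Guaranteed" if guaranteed else "Burstable"
```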
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:33:13.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 18 11:33:14.230: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:33:35.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-jfsj4" for this suite.
Dec 18 11:34:01.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:34:01.927: INFO: namespace: e2e-tests-init-container-jfsj4, resource: bindings, ignored listing per whitelist
Dec 18 11:34:01.945: INFO: namespace e2e-tests-init-container-jfsj4 deletion completed in 26.255269142s

• [SLOW TEST:48.159 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
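The init-container test waits (~21s in this run, between the `PodSpec: initContainers` line and `[AfterEach]`) for each init container to run to completion, in order, before the RestartAlways main container starts. A minimal sketch of such a pod spec; container names, image, and commands are illustrative assumptions:

```python
def restart_always_init_pod() -> dict:
    """Pod whose init containers must each exit 0, in order, before the
    long-running (RestartPolicy=Always) main container starts."""
    def init(name: str) -> dict:
        # Each init container exits 0 so the next step can run.
        return {"name": name, "image": "busybox", "command": ["true"]}

    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-init-example"},
        "spec": {
            "restartPolicy": "Always",
            "initContainers": [init("init1"), init("init2")],
            "containers": [{"name": "run1", "image": "busybox",
                            "command": ["sh", "-c", "sleep 3600"]}],
        },
    }
```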
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:34:01.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 18 11:34:02.221: INFO: Waiting up to 5m0s for pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004" in namespace "e2e-tests-containers-slzdj" to be "success or failure"
Dec 18 11:34:02.244: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.811019ms
Dec 18 11:34:04.378: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15684957s
Dec 18 11:34:06.402: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180410862s
Dec 18 11:34:08.913: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.691930173s
Dec 18 11:34:10.969: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748007746s
Dec 18 11:34:13.288: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.066138281s
Dec 18 11:34:15.304: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.082177933s
STEP: Saw pod success
Dec 18 11:34:15.304: INFO: Pod "client-containers-4a2e068c-218a-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:34:15.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4a2e068c-218a-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 11:34:16.402: INFO: Waiting for pod client-containers-4a2e068c-218a-11ea-ad77-0242ac110004 to disappear
Dec 18 11:34:16.737: INFO: Pod client-containers-4a2e068c-218a-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:34:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-slzdj" for this suite.
Dec 18 11:34:23.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:34:23.038: INFO: namespace: e2e-tests-containers-slzdj, resource: bindings, ignored listing per whitelist
Dec 18 11:34:23.228: INFO: namespace e2e-tests-containers-slzdj deletion completed in 6.474282353s

• [SLOW TEST:21.283 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
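"Image defaults if command and args are blank" exercises the documented precedence between the image's ENTRYPOINT/CMD and the pod spec's `command`/`args`. That rule can be sketched as a pure function (list-of-strings invocations; the rule is from the Kubernetes docs, the function itself is an illustration):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve what actually runs, per the documented command/args precedence."""
    if command and args:
        return command + args            # both override the image entirely
    if command:
        return command                   # image CMD is ignored as well
    if args:
        return image_entrypoint + args   # args replace only the image CMD
    return image_entrypoint + image_cmd  # blank command/args: image defaults
```

With both fields blank, as in this test, the container runs exactly what the image's ENTRYPOINT and CMD specify.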
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:34:23.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:34:23.457: INFO: Creating ReplicaSet my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004
Dec 18 11:34:23.522: INFO: Pod name my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004: Found 0 pods out of 1
Dec 18 11:34:28.575: INFO: Pod name my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004: Found 1 pods out of 1
Dec 18 11:34:28.575: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004" is running
Dec 18 11:34:34.651: INFO: Pod "my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004-nptvr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 11:34:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 11:34:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 11:34:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 11:34:23 +0000 UTC Reason: Message:}])
Dec 18 11:34:34.652: INFO: Trying to dial the pod
Dec 18 11:34:39.690: INFO: Controller my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004: Got expected result from replica 1 [my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004-nptvr]: "my-hostname-basic-56da5dca-218a-11ea-ad77-0242ac110004-nptvr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:34:39.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-sq889" for this suite.
Dec 18 11:34:48.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:34:48.493: INFO: namespace: e2e-tests-replicaset-sq889, resource: bindings, ignored listing per whitelist
Dec 18 11:34:48.595: INFO: namespace e2e-tests-replicaset-sq889 deletion completed in 8.89503266s

• [SLOW TEST:25.367 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
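The ReplicaSet test above creates one replica of a hostname-serving pod and dials it until it answers with its own pod name. A sketch of such a ReplicaSet; the serve-hostname image and port are assumptions, and note the invariant the API enforces: the selector must match the template's labels.

```python
def hostname_replicaset(name: str, replicas: int = 1) -> dict:
    """ReplicaSet whose selector matches its pod template's labels; each
    replica serves its own hostname so the test can dial every pod."""
    labels = {"name": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "ReplicaSet",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match template labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        # Illustrative serve-hostname-style image (assumption).
                        "image": "k8s.gcr.io/serve-hostname:1.1",
                        "ports": [{"containerPort": 9376}],
                    }],
                },
            },
        },
    }
```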
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:34:48.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 18 11:35:01.917: INFO: Successfully updated pod "labelsupdate662c6c85-218a-11ea-ad77-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:35:04.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cwctf" for this suite.
Dec 18 11:35:28.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:35:28.234: INFO: namespace: e2e-tests-projected-cwctf, resource: bindings, ignored listing per whitelist
Dec 18 11:35:28.335: INFO: namespace e2e-tests-projected-cwctf deletion completed in 24.161946388s

• [SLOW TEST:39.737 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:35:28.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 18 11:38:35.193: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 11:38:35.351: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 18 11:38:37.352: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 11:38:37.385: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 18 11:38:39.353: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 11:38:39.385: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 18 11:38:41.352: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 11:38:41.367: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 18 11:38:43.352: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 11:38:43.415: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 18 11:40:23.352: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 11:40:23.393: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:40:23.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vmqpn" for this suite.
Dec 18 11:40:47.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:40:47.667: INFO: namespace: e2e-tests-container-lifecycle-hook-vmqpn, resource: bindings, ignored listing per whitelist
Dec 18 11:40:47.737: INFO: namespace e2e-tests-container-lifecycle-hook-vmqpn deletion completed in 24.325147647s

• [SLOW TEST:319.401 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
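The long run of "Waiting for pod pod-with-poststart-exec-hook to disappear" lines above comes from a fixed-interval deletion poll: probe roughly every 2 seconds until the pod is gone or a timeout elapses. A minimal Python sketch of that pattern (the `pod_exists` callable is a hypothetical stand-in for the real API lookup, not the framework's actual code):

```python
import time

def wait_for_pod_to_disappear(pod_exists, name, interval=2.0, timeout=300.0,
                              clock=time.monotonic, sleep=time.sleep):
    """Poll until pod_exists(name) returns False or the timeout elapses.

    Mirrors the log above: one "Waiting ..." probe roughly every 2 seconds,
    returning True once the pod no longer exists.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if not pod_exists(name):
            return True   # pod no longer exists
        sleep(interval)   # pod still exists; wait before the next probe
    return False          # timed out; pod still exists
```

With a fake `pod_exists` that reports the pod gone on the fourth probe, the helper returns True after four calls, matching the probe/exists cadence in the log.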
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:40:47.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 18 11:40:47.947: INFO: Waiting up to 5m0s for pod "pod-3bf72555-218b-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-8zpf5" to be "success or failure"
Dec 18 11:40:47.960: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.272486ms
Dec 18 11:40:49.982: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035676527s
Dec 18 11:40:52.012: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065399556s
Dec 18 11:40:54.442: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494930707s
Dec 18 11:40:56.461: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514191172s
Dec 18 11:40:58.480: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.533346396s
STEP: Saw pod success
Dec 18 11:40:58.480: INFO: Pod "pod-3bf72555-218b-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:40:58.493: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3bf72555-218b-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 11:40:58.765: INFO: Waiting for pod pod-3bf72555-218b-11ea-ad77-0242ac110004 to disappear
Dec 18 11:40:59.861: INFO: Pod pod-3bf72555-218b-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:40:59.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8zpf5" for this suite.
Dec 18 11:41:06.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:41:06.259: INFO: namespace: e2e-tests-emptydir-8zpf5, resource: bindings, ignored listing per whitelist
Dec 18 11:41:06.356: INFO: namespace e2e-tests-emptydir-8zpf5 deletion completed in 6.468919393s

• [SLOW TEST:18.619 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
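The (root,0777,tmpfs) case above writes a file into the emptyDir mount and verifies its permission bits. A rough local analogue of that mode check in Python (an ordinary temp directory standing in for the tmpfs-backed emptyDir; the explicit chmod mirrors the fact that the test pins the mode rather than relying on the umask):

```python
import os
import stat
import tempfile

def file_mode(path):
    """Return only the permission bits of path (e.g. 0o777)."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Create a scratch file and force its mode to 0777, analogous to the file
# the e2e pod writes into the emptyDir mount.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "mount-test")
    with open(target, "w") as f:
        f.write("test data\n")
    os.chmod(target, 0o777)  # set the mode explicitly so the umask cannot interfere
    assert file_mode(target) == 0o777
```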
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:41:06.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-47305f09-218b-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:41:06.818: INFO: Waiting up to 5m0s for pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-kd2jj" to be "success or failure"
Dec 18 11:41:06.840: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.349789ms
Dec 18 11:41:08.854: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036097863s
Dec 18 11:41:10.889: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070949011s
Dec 18 11:41:12.906: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08773113s
Dec 18 11:41:14.917: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099214905s
Dec 18 11:41:16.938: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119537244s
STEP: Saw pod success
Dec 18 11:41:16.938: INFO: Pod "pod-secrets-4733f759-218b-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:41:16.946: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4733f759-218b-11ea-ad77-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 18 11:41:17.154: INFO: Waiting for pod pod-secrets-4733f759-218b-11ea-ad77-0242ac110004 to disappear
Dec 18 11:41:17.218: INFO: Pod pod-secrets-4733f759-218b-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:41:17.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kd2jj" for this suite.
Dec 18 11:41:23.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:41:23.555: INFO: namespace: e2e-tests-secrets-kd2jj, resource: bindings, ignored listing per whitelist
Dec 18 11:41:23.575: INFO: namespace e2e-tests-secrets-kd2jj deletion completed in 6.341159726s

• [SLOW TEST:17.219 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
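The Secrets test above consumes a secret through a volume whose source maps keys to custom paths with an explicit per-item mode. A small sketch of that projection logic (simplified; `items` follows the same key/path/mode shape as the volume source, and the key/path/mode values below are illustrative, not taken from this run):

```python
def project_secret(data, items=None, default_mode=0o644):
    """Map secret keys to file paths, resolving a per-item mode when given.

    data:  {key: value} from the Secret
    items: optional [{"key": ..., "path": ..., "mode": ...}] mappings
    Returns {path: (value, mode)} describing the files to be written
    into the volume.
    """
    if not items:
        # No mappings: every key becomes a file named after the key.
        return {key: (value, default_mode) for key, value in data.items()}
    projected = {}
    for item in items:
        mode = item.get("mode", default_mode)  # item mode overrides the default
        projected[item["path"]] = (data[item["key"]], mode)
    return projected
```

For example, mapping key `data-1` to path `new-path-data-1` with mode `0o400` yields a single read-only file at the mapped path.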
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:41:23.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 18 11:41:24.053: INFO: Waiting up to 5m0s for pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-wjx42" to be "success or failure"
Dec 18 11:41:24.083: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.955586ms
Dec 18 11:41:26.172: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11788947s
Dec 18 11:41:28.189: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135182716s
Dec 18 11:41:30.487: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433655355s
Dec 18 11:41:32.521: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467539467s
Dec 18 11:41:34.550: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496716432s
STEP: Saw pod success
Dec 18 11:41:34.551: INFO: Pod "downward-api-516c1a57-218b-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:41:34.569: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-516c1a57-218b-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 11:41:34.804: INFO: Waiting for pod downward-api-516c1a57-218b-11ea-ad77-0242ac110004 to disappear
Dec 18 11:41:34.809: INFO: Pod downward-api-516c1a57-218b-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:41:34.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wjx42" for this suite.
Dec 18 11:41:40.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:41:41.136: INFO: namespace: e2e-tests-downward-api-wjx42, resource: bindings, ignored listing per whitelist
Dec 18 11:41:41.155: INFO: namespace e2e-tests-downward-api-wjx42 deletion completed in 6.331253407s

• [SLOW TEST:17.580 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
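The Downward API test above injects the pod's `metadata.uid` into the container's environment and reads it back. A minimal sketch of the consuming side (`POD_UID` is a hypothetical variable name chosen for illustration; the e2e test wires `metadata.uid` into the environment the same way):

```python
import os

def pod_uid_from_env(environ=os.environ):
    """Read the pod UID injected by the downward API as an env var.

    Returns None when the variable is absent, e.g. outside a pod.
    """
    return environ.get("POD_UID")
```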
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:41:41.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 18 11:41:41.403: INFO: namespace e2e-tests-kubectl-7z4ll
Dec 18 11:41:41.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7z4ll'
Dec 18 11:41:43.880: INFO: stderr: ""
Dec 18 11:41:43.881: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 18 11:41:45.562: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:45.563: INFO: Found 0 / 1
Dec 18 11:41:46.023: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:46.023: INFO: Found 0 / 1
Dec 18 11:41:46.902: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:46.902: INFO: Found 0 / 1
Dec 18 11:41:47.917: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:47.917: INFO: Found 0 / 1
Dec 18 11:41:49.895: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:49.895: INFO: Found 0 / 1
Dec 18 11:41:50.898: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:50.898: INFO: Found 0 / 1
Dec 18 11:41:51.936: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:51.936: INFO: Found 0 / 1
Dec 18 11:41:52.901: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:52.901: INFO: Found 0 / 1
Dec 18 11:41:53.916: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:53.916: INFO: Found 1 / 1
Dec 18 11:41:53.916: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 18 11:41:53.929: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 11:41:53.929: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 18 11:41:53.929: INFO: wait on redis-master startup in e2e-tests-kubectl-7z4ll 
Dec 18 11:41:53.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-27ns7 redis-master --namespace=e2e-tests-kubectl-7z4ll'
Dec 18 11:41:54.258: INFO: stderr: ""
Dec 18 11:41:54.258: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 18 Dec 11:41:52.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Dec 11:41:52.225 # Server started, Redis version 3.2.12\n1:M 18 Dec 11:41:52.225 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Dec 11:41:52.225 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 18 11:41:54.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-7z4ll'
Dec 18 11:41:54.443: INFO: stderr: ""
Dec 18 11:41:54.443: INFO: stdout: "service/rm2 exposed\n"
Dec 18 11:41:54.457: INFO: Service rm2 in namespace e2e-tests-kubectl-7z4ll found.
STEP: exposing service
Dec 18 11:41:56.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-7z4ll'
Dec 18 11:41:56.823: INFO: stderr: ""
Dec 18 11:41:56.823: INFO: stdout: "service/rm3 exposed\n"
Dec 18 11:41:56.845: INFO: Service rm3 in namespace e2e-tests-kubectl-7z4ll found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:41:58.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7z4ll" for this suite.
Dec 18 11:42:25.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:42:25.287: INFO: namespace: e2e-tests-kubectl-7z4ll, resource: bindings, ignored listing per whitelist
Dec 18 11:42:25.287: INFO: namespace e2e-tests-kubectl-7z4ll deletion completed in 26.254351958s

• [SLOW TEST:44.132 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
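Each `kubectl expose` call above derives a new Service from the target's selector plus the given `--port`/`--target-port`. A condensed sketch of that derivation, with plain dicts standing in for the real API objects (the selector `{"app": "redis"}` matches the RC in this run; the rest is a simplified shape, not the full manifest kubectl emits):

```python
def expose(selector, name, port, target_port):
    """Build a Service manifest the way `kubectl expose` does:
    reuse the workload's pod selector and wire port -> targetPort."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# Mirrors the two expose calls in the log: rc/redis-master -> rm2,
# then service rm2 -> rm3, both forwarding to container port 6379.
rm2 = expose({"app": "redis"}, "rm2", 1234, 6379)
rm3 = expose({"app": "redis"}, "rm3", 2345, 6379)
```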
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:42:25.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-jsmtv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jsmtv to expose endpoints map[]
Dec 18 11:42:25.754: INFO: Get endpoints failed (10.883191ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 18 11:42:26.767: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jsmtv exposes endpoints map[] (1.023691738s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-jsmtv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jsmtv to expose endpoints map[pod1:[80]]
Dec 18 11:42:33.653: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.873335193s elapsed, will retry)
Dec 18 11:42:40.267: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jsmtv exposes endpoints map[pod1:[80]] (13.487358213s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-jsmtv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jsmtv to expose endpoints map[pod1:[80] pod2:[80]]
Dec 18 11:42:47.605: INFO: Unexpected endpoints: found map[76ee3459-218b-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (7.327675505s elapsed, will retry)
Dec 18 11:42:52.315: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jsmtv exposes endpoints map[pod1:[80] pod2:[80]] (12.037739015s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-jsmtv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jsmtv to expose endpoints map[pod2:[80]]
Dec 18 11:42:53.655: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jsmtv exposes endpoints map[pod2:[80]] (1.318347761s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-jsmtv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jsmtv to expose endpoints map[]
Dec 18 11:42:53.806: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jsmtv exposes endpoints map[] (108.543406ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:42:54.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jsmtv" for this suite.
Dec 18 11:43:18.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:43:18.650: INFO: namespace: e2e-tests-services-jsmtv, resource: bindings, ignored listing per whitelist
Dec 18 11:43:18.672: INFO: namespace e2e-tests-services-jsmtv deletion completed in 24.555277761s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:53.384 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
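The Services test above repeatedly compares the observed endpoints against an expected map of pod name to ports (`map[pod1:[80] pod2:[80]]` and so on), retrying on mismatch until the timeout. A condensed sketch of that comparison:

```python
def endpoints_match(observed, expected):
    """True when the observed {pod: [ports]} map equals the expected one,
    ignoring port ordering within each pod's list."""
    if set(observed) != set(expected):
        return False  # different set of backing pods
    return all(sorted(observed[pod]) == sorted(expected[pod]) for pod in expected)
```

A mismatch such as `observed={"pod1": [80]}` against `expected={"pod1": [80], "pod2": [80]}` returns False, which corresponds to the "Unexpected endpoints ... will retry" lines in the log.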
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:43:18.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-96127033-218b-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:43:19.061: INFO: Waiting up to 5m0s for pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-4k9t5" to be "success or failure"
Dec 18 11:43:19.104: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 42.096366ms
Dec 18 11:43:21.276: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214154829s
Dec 18 11:43:23.287: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224906446s
Dec 18 11:43:25.444: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382479853s
Dec 18 11:43:27.481: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41971125s
Dec 18 11:43:29.502: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.44039404s
Dec 18 11:43:31.520: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.458733341s
STEP: Saw pod success
Dec 18 11:43:31.521: INFO: Pod "pod-secrets-9614b163-218b-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:43:31.527: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9614b163-218b-11ea-ad77-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 18 11:43:32.413: INFO: Waiting for pod pod-secrets-9614b163-218b-11ea-ad77-0242ac110004 to disappear
Dec 18 11:43:32.448: INFO: Pod pod-secrets-9614b163-218b-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:43:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4k9t5" for this suite.
Dec 18 11:43:38.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:43:38.694: INFO: namespace: e2e-tests-secrets-4k9t5, resource: bindings, ignored listing per whitelist
Dec 18 11:43:38.712: INFO: namespace e2e-tests-secrets-4k9t5 deletion completed in 6.250491665s

• [SLOW TEST:20.040 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
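The spec above mounts a Secret as a volume with a non-default file mode while running as non-root with an fsGroup. The log does not include the pod spec itself; a minimal sketch of the kind of manifest this test exercises (all names and values below are illustrative assumptions, not taken from the log):

```yaml
# Hypothetical sketch: Secret mounted read-only with defaultMode 0440,
# running as a non-root user with an fsGroup so files are group-readable.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # non-root UID
    fsGroup: 1001                  # volume files become group-owned by this GID
  containers:
  - name: secret-volume-test       # container name matches the log
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret        # assumed pre-created Secret
      defaultMode: 0440            # octal mode applied to each projected key
```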
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:43:38.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:43:39.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-mpxzt" to be "success or failure"
Dec 18 11:43:39.033: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.309104ms
Dec 18 11:43:41.058: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04411444s
Dec 18 11:43:43.078: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064696399s
Dec 18 11:43:45.099: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084948632s
Dec 18 11:43:47.128: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114184883s
Dec 18 11:43:49.175: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161639358s
Dec 18 11:43:51.199: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.18582639s
STEP: Saw pod success
Dec 18 11:43:51.200: INFO: Pod "downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:43:51.206: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:43:51.283: INFO: Waiting for pod downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004 to disappear
Dec 18 11:43:51.369: INFO: Pod downwardapi-volume-a1fbae79-218b-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:43:51.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mpxzt" for this suite.
Dec 18 11:43:57.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:43:57.521: INFO: namespace: e2e-tests-projected-mpxzt, resource: bindings, ignored listing per whitelist
Dec 18 11:43:57.816: INFO: namespace e2e-tests-projected-mpxzt deletion completed in 6.434414487s

• [SLOW TEST:19.104 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
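The Projected downwardAPI spec above exposes the container's CPU request as a file. A minimal sketch of such a pod, assuming typical field values (the actual spec is not printed in the log):

```yaml
# Hypothetical sketch: projected downward API volume exposing the
# container's cpu request as a readable file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # illustrative name
spec:
  containers:
  - name: client-container             # container name matches the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                      # assumed request value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```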
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:43:57.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:44:08.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ggnjx" for this suite.
Dec 18 11:44:50.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:44:50.823: INFO: namespace: e2e-tests-kubelet-test-ggnjx, resource: bindings, ignored listing per whitelist
Dec 18 11:44:50.853: INFO: namespace e2e-tests-kubelet-test-ggnjx deletion completed in 42.411601034s

• [SLOW TEST:53.037 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
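The hostAliases spec above verifies that user-supplied host entries are merged by the kubelet into the container's /etc/hosts. A sketch of such a pod, with assumed example entries:

```yaml
# Hypothetical sketch: hostAliases entries that the kubelet merges into
# the pod's /etc/hosts.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases           # illustrative name
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]
```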
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:44:50.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:44:51.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 18 11:44:51.165: INFO: stderr: ""
Dec 18 11:44:51.165: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 18 11:44:51.198: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:44:51.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zzs9k" for this suite.
Dec 18 11:44:57.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:44:57.498: INFO: namespace: e2e-tests-kubectl-zzs9k, resource: bindings, ignored listing per whitelist
Dec 18 11:44:57.540: INFO: namespace e2e-tests-kubectl-zzs9k deletion completed in 6.332960459s

S [SKIPPING] [6.687 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 18 11:44:51.198: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:44:57.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 18 11:45:28.346: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:28.346: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:29.105: INFO: Exec stderr: ""
Dec 18 11:45:29.105: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:29.105: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:29.478: INFO: Exec stderr: ""
Dec 18 11:45:29.478: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:29.479: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:29.861: INFO: Exec stderr: ""
Dec 18 11:45:29.861: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:29.861: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:30.201: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 18 11:45:30.201: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:30.201: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:30.533: INFO: Exec stderr: ""
Dec 18 11:45:30.534: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:30.534: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:30.910: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 18 11:45:30.911: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:30.911: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:31.204: INFO: Exec stderr: ""
Dec 18 11:45:31.205: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:31.205: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:31.597: INFO: Exec stderr: ""
Dec 18 11:45:31.597: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:31.598: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:31.977: INFO: Exec stderr: ""
Dec 18 11:45:31.978: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-44vp5 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 11:45:31.978: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 11:45:32.696: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:45:32.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-44vp5" for this suite.
Dec 18 11:46:28.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:46:29.120: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-44vp5, resource: bindings, ignored listing per whitelist
Dec 18 11:46:29.258: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-44vp5 deletion completed in 56.424387084s

• [SLOW TEST:91.717 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
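The KubeletManagedEtcHosts spec above checks three cases visible in the exec output: ordinary containers get a kubelet-managed /etc/hosts, a container that explicitly mounts its own /etc/hosts is left alone (the busybox-3 case), and hostNetwork=true pods use the node's file. A sketch of the opt-out case, with assumed volume names:

```yaml
# Hypothetical sketch: a container that mounts its own /etc/hosts,
# which opts it out of kubelet management (the busybox-3 case above).
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-example              # illustrative name
spec:
  containers:
  - name: busybox-3                    # container name matches the log
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts            # explicit mount: kubelet leaves it alone
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
```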
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:46:29.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:46:29.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-wqg8h" to be "success or failure"
Dec 18 11:46:29.493: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 64.740978ms
Dec 18 11:46:31.771: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342371725s
Dec 18 11:46:33.798: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369884326s
Dec 18 11:46:36.922: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.493540724s
Dec 18 11:46:38.969: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.540575258s
Dec 18 11:46:40.990: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.561107139s
STEP: Saw pod success
Dec 18 11:46:40.990: INFO: Pod "downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:46:40.995: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:46:41.180: INFO: Waiting for pod downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004 to disappear
Dec 18 11:46:41.192: INFO: Pod downwardapi-volume-078ed41c-218c-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:46:41.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wqg8h" for this suite.
Dec 18 11:46:47.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:46:47.340: INFO: namespace: e2e-tests-downward-api-wqg8h, resource: bindings, ignored listing per whitelist
Dec 18 11:46:47.524: INFO: namespace e2e-tests-downward-api-wqg8h deletion completed in 6.259438848s

• [SLOW TEST:18.266 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
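The Downward API spec above sets a volume-wide defaultMode on the generated files. A sketch of such a manifest, with an assumed mode and item:

```yaml
# Hypothetical sketch: downward API volume with a defaultMode applied
# to every generated file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode        # illustrative name
spec:
  containers:
  - name: client-container             # container name matches the log
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                # assumed mode under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```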
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:46:47.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 11:46:47.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vlz2f'
Dec 18 11:46:47.961: INFO: stderr: ""
Dec 18 11:46:47.961: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 18 11:46:48.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-vlz2f'
Dec 18 11:46:52.607: INFO: stderr: ""
Dec 18 11:46:52.607: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:46:52.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vlz2f" for this suite.
Dec 18 11:46:58.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:46:58.765: INFO: namespace: e2e-tests-kubectl-vlz2f, resource: bindings, ignored listing per whitelist
Dec 18 11:46:58.890: INFO: namespace e2e-tests-kubectl-vlz2f deletion completed in 6.267201149s

• [SLOW TEST:11.366 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:46:58.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-drhqb
Dec 18 11:47:09.193: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-drhqb
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 11:47:09.196: INFO: Initial restart count of pod liveness-exec is 0
Dec 18 11:48:07.076: INFO: Restart count of pod e2e-tests-container-probe-drhqb/liveness-exec is now 1 (57.87922134s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:48:07.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-drhqb" for this suite.
Dec 18 11:48:15.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:48:18.008: INFO: namespace: e2e-tests-container-probe-drhqb, resource: bindings, ignored listing per whitelist
Dec 18 11:48:18.081: INFO: namespace e2e-tests-container-probe-drhqb deletion completed in 10.835258454s

• [SLOW TEST:79.190 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
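The probe spec above creates pod liveness-exec and waits for restartCount to go from 0 to 1, which the log shows happening after about 58 seconds. A sketch of the standard pattern such a test uses: the container removes its own health file partway through, so the exec probe starts failing and the kubelet restarts it (timings and command below are assumptions, not taken from the log):

```yaml
# Hypothetical sketch: an exec liveness probe ("cat /tmp/health") that
# begins failing once the container deletes the file, forcing a restart.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec                  # pod name matches the log; spec is assumed
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```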
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:48:18.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mbxlg
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 18 11:48:18.317: INFO: Found 0 stateful pods, waiting for 3
Dec 18 11:48:28.326: INFO: Found 1 stateful pods, waiting for 3
Dec 18 11:48:38.352: INFO: Found 2 stateful pods, waiting for 3
Dec 18 11:48:48.364: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 11:48:48.364: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 11:48:48.365: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 11:48:58.469: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 11:48:58.469: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 11:48:58.469: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 11:48:58.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mbxlg ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 11:48:59.260: INFO: stderr: ""
Dec 18 11:48:59.261: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 11:48:59.261: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 18 11:49:09.362: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 18 11:49:19.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mbxlg ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 11:49:21.037: INFO: stderr: ""
Dec 18 11:49:21.037: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 11:49:21.037: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 11:49:31.134: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:49:31.134: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:49:31.134: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:49:31.134: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:49:41.163: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:49:41.164: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:49:41.164: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:49:51.473: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:49:51.473: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:50:01.182: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:50:01.182: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:50:11.214: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:50:11.214: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 11:50:21.173: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 18 11:50:31.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mbxlg ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 11:50:32.057: INFO: stderr: ""
Dec 18 11:50:32.058: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 11:50:32.058: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 11:50:42.200: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 18 11:50:52.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mbxlg ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 11:50:52.870: INFO: stderr: ""
Dec 18 11:50:52.871: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 11:50:52.871: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 11:51:03.444: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:51:03.444: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:03.444: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:03.444: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:13.494: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:51:13.494: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:13.494: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:23.507: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:51:23.508: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:23.508: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:33.484: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:51:33.484: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:43.518: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
Dec 18 11:51:43.518: INFO: Waiting for Pod e2e-tests-statefulset-mbxlg/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 11:51:53.489: INFO: Waiting for StatefulSet e2e-tests-statefulset-mbxlg/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 18 11:52:03.601: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mbxlg
Dec 18 11:52:03.610: INFO: Scaling statefulset ss2 to 0
Dec 18 11:52:33.693: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 11:52:33.712: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:52:33.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mbxlg" for this suite.
Dec 18 11:52:41.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:52:42.093: INFO: namespace: e2e-tests-statefulset-mbxlg, resource: bindings, ignored listing per whitelist
Dec 18 11:52:42.125: INFO: namespace e2e-tests-statefulset-mbxlg deletion completed in 8.239522074s

• [SLOW TEST:264.043 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:52:42.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 18 11:52:42.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 18 11:52:42.615: INFO: stderr: ""
Dec 18 11:52:42.615: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:52:42.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dxf5h" for this suite.
Dec 18 11:52:48.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:52:48.714: INFO: namespace: e2e-tests-kubectl-dxf5h, resource: bindings, ignored listing per whitelist
Dec 18 11:52:48.803: INFO: namespace e2e-tests-kubectl-dxf5h deletion completed in 6.163844103s

• [SLOW TEST:6.678 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:52:48.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:52:49.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-qrhv4" to be "success or failure"
Dec 18 11:52:49.086: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.610581ms
Dec 18 11:52:51.386: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31724717s
Dec 18 11:52:53.438: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369305984s
Dec 18 11:52:55.874: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804510935s
Dec 18 11:52:57.929: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.860226495s
Dec 18 11:52:59.944: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.875011803s
STEP: Saw pod success
Dec 18 11:52:59.944: INFO: Pod "downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:52:59.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:53:00.303: INFO: Waiting for pod downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004 to disappear
Dec 18 11:53:00.314: INFO: Pod downwardapi-volume-e9d71354-218c-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:53:00.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qrhv4" for this suite.
Dec 18 11:53:07.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:53:07.769: INFO: namespace: e2e-tests-downward-api-qrhv4, resource: bindings, ignored listing per whitelist
Dec 18 11:53:07.813: INFO: namespace e2e-tests-downward-api-qrhv4 deletion completed in 7.458368041s

• [SLOW TEST:19.010 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:53:07.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-smh6
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 11:53:08.105: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-smh6" in namespace "e2e-tests-subpath-kgtb5" to be "success or failure"
Dec 18 11:53:08.114: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599327ms
Dec 18 11:53:10.132: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0262647s
Dec 18 11:53:12.156: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051024883s
Dec 18 11:53:14.327: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221371052s
Dec 18 11:53:16.344: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.239057274s
Dec 18 11:53:18.362: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256158495s
Dec 18 11:53:20.384: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.278610454s
Dec 18 11:53:22.411: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.30508207s
Dec 18 11:53:24.458: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.352781168s
Dec 18 11:53:26.492: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 18.386996741s
Dec 18 11:53:28.525: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 20.419918989s
Dec 18 11:53:30.569: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 22.463080327s
Dec 18 11:53:32.592: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 24.486589966s
Dec 18 11:53:34.643: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 26.537131745s
Dec 18 11:53:36.672: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 28.566069408s
Dec 18 11:53:38.685: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 30.579140506s
Dec 18 11:53:40.697: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 32.591925432s
Dec 18 11:53:42.786: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Running", Reason="", readiness=false. Elapsed: 34.680989789s
Dec 18 11:53:44.800: INFO: Pod "pod-subpath-test-secret-smh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.6950216s
STEP: Saw pod success
Dec 18 11:53:44.801: INFO: Pod "pod-subpath-test-secret-smh6" satisfied condition "success or failure"
Dec 18 11:53:44.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-smh6 container test-container-subpath-secret-smh6: 
STEP: delete the pod
Dec 18 11:53:44.922: INFO: Waiting for pod pod-subpath-test-secret-smh6 to disappear
Dec 18 11:53:45.042: INFO: Pod pod-subpath-test-secret-smh6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-smh6
Dec 18 11:53:45.042: INFO: Deleting pod "pod-subpath-test-secret-smh6" in namespace "e2e-tests-subpath-kgtb5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:53:45.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-kgtb5" for this suite.
Dec 18 11:53:53.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:53:53.194: INFO: namespace: e2e-tests-subpath-kgtb5, resource: bindings, ignored listing per whitelist
Dec 18 11:53:53.329: INFO: namespace e2e-tests-subpath-kgtb5 deletion completed in 8.26068823s

• [SLOW TEST:45.515 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:53:53.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 11:53:53.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-vw5zg" to be "success or failure"
Dec 18 11:53:53.579: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 29.148423ms
Dec 18 11:53:55.610: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060198366s
Dec 18 11:53:57.633: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082619567s
Dec 18 11:53:59.663: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113519624s
Dec 18 11:54:01.699: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148664939s
Dec 18 11:54:03.738: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188079986s
STEP: Saw pod success
Dec 18 11:54:03.738: INFO: Pod "downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:54:03.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 11:54:04.301: INFO: Waiting for pod downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004 to disappear
Dec 18 11:54:04.455: INFO: Pod downwardapi-volume-10462202-218d-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:54:04.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vw5zg" for this suite.
Dec 18 11:54:10.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:54:10.848: INFO: namespace: e2e-tests-projected-vw5zg, resource: bindings, ignored listing per whitelist
Dec 18 11:54:10.897: INFO: namespace e2e-tests-projected-vw5zg deletion completed in 6.297218188s

• [SLOW TEST:17.568 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:54:10.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-pkb2
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 11:54:11.131: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pkb2" in namespace "e2e-tests-subpath-t2ccd" to be "success or failure"
Dec 18 11:54:11.267: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 135.587538ms
Dec 18 11:54:13.624: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492718097s
Dec 18 11:54:15.659: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528231497s
Dec 18 11:54:18.194: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.062661944s
Dec 18 11:54:20.207: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.076162s
Dec 18 11:54:22.219: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.087882328s
Dec 18 11:54:24.245: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.114204854s
Dec 18 11:54:26.275: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.143684674s
Dec 18 11:54:28.925: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.794421107s
Dec 18 11:54:30.939: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 19.807910981s
Dec 18 11:54:32.964: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 21.833505904s
Dec 18 11:54:34.980: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 23.848617368s
Dec 18 11:54:37.007: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 25.875595047s
Dec 18 11:54:39.020: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 27.889099009s
Dec 18 11:54:41.041: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 29.910345776s
Dec 18 11:54:43.056: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 31.925497889s
Dec 18 11:54:45.092: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 33.961050502s
Dec 18 11:54:47.109: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Running", Reason="", readiness=false. Elapsed: 35.977683841s
Dec 18 11:54:49.465: INFO: Pod "pod-subpath-test-projected-pkb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.33386642s
STEP: Saw pod success
Dec 18 11:54:49.465: INFO: Pod "pod-subpath-test-projected-pkb2" satisfied condition "success or failure"
Dec 18 11:54:49.483: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-pkb2 container test-container-subpath-projected-pkb2: 
STEP: delete the pod
Dec 18 11:54:50.024: INFO: Waiting for pod pod-subpath-test-projected-pkb2 to disappear
Dec 18 11:54:50.139: INFO: Pod pod-subpath-test-projected-pkb2 no longer exists
STEP: Deleting pod pod-subpath-test-projected-pkb2
Dec 18 11:54:50.139: INFO: Deleting pod "pod-subpath-test-projected-pkb2" in namespace "e2e-tests-subpath-t2ccd"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:54:50.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-t2ccd" for this suite.
Dec 18 11:54:58.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:54:58.294: INFO: namespace: e2e-tests-subpath-t2ccd, resource: bindings, ignored listing per whitelist
Dec 18 11:54:58.445: INFO: namespace e2e-tests-subpath-t2ccd deletion completed in 8.280481782s

• [SLOW TEST:47.548 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:54:58.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 18 11:54:58.719: INFO: Waiting up to 5m0s for pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-4pxqk" to be "success or failure"
Dec 18 11:54:58.743: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.622254ms
Dec 18 11:55:01.122: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40305421s
Dec 18 11:55:03.150: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431004397s
Dec 18 11:55:05.526: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.807193414s
Dec 18 11:55:07.555: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.836090183s
Dec 18 11:55:09.581: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.862347778s
Dec 18 11:55:12.042: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.322669251s
STEP: Saw pod success
Dec 18 11:55:12.042: INFO: Pod "downward-api-371e2315-218d-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:55:12.051: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-371e2315-218d-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 11:55:12.655: INFO: Waiting for pod downward-api-371e2315-218d-11ea-ad77-0242ac110004 to disappear
Dec 18 11:55:12.763: INFO: Pod downward-api-371e2315-218d-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:55:12.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4pxqk" for this suite.
Dec 18 11:55:18.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:55:18.941: INFO: namespace: e2e-tests-downward-api-4pxqk, resource: bindings, ignored listing per whitelist
Dec 18 11:55:19.132: INFO: namespace e2e-tests-downward-api-4pxqk deletion completed in 6.327798031s

• [SLOW TEST:20.684 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:55:19.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:55:19.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qrc42" for this suite.
Dec 18 11:55:25.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:55:25.684: INFO: namespace: e2e-tests-kubelet-test-qrc42, resource: bindings, ignored listing per whitelist
Dec 18 11:55:25.761: INFO: namespace e2e-tests-kubelet-test-qrc42 deletion completed in 6.253393804s

• [SLOW TEST:6.628 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:55:25.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 11:55:26.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:55:38.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-knr89" for this suite.
Dec 18 11:56:22.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:56:22.849: INFO: namespace: e2e-tests-pods-knr89, resource: bindings, ignored listing per whitelist
Dec 18 11:56:23.057: INFO: namespace e2e-tests-pods-knr89 deletion completed in 44.284269242s

• [SLOW TEST:57.295 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:56:23.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 18 11:56:47.457: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 11:56:47.569: INFO: Pod pod-with-prestop-http-hook still exists
Dec 18 11:56:49.570: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 11:56:49.590: INFO: Pod pod-with-prestop-http-hook still exists
Dec 18 11:56:51.570: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 11:56:51.582: INFO: Pod pod-with-prestop-http-hook still exists
Dec 18 11:56:53.570: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 11:56:53.591: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:56:53.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mmv7h" for this suite.
Dec 18 11:57:17.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:57:17.870: INFO: namespace: e2e-tests-container-lifecycle-hook-mmv7h, resource: bindings, ignored listing per whitelist
Dec 18 11:57:17.899: INFO: namespace e2e-tests-container-lifecycle-hook-mmv7h deletion completed in 24.226313652s

• [SLOW TEST:54.842 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
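The spec above creates a handler pod to receive the hook's HTTPGet request, creates a pod with a preStop hook, deletes it, polls until the pod disappears, and then verifies the handler saw the request. For reference, the hooked pod has roughly the following shape; this is a hand-written sketch using Kubernetes v1 API field names (the image, port, and handler path are assumptions, not values shown in this log):

```python
# Sketch of a pod with a preStop HTTP lifecycle hook. The pod name matches
# the log; the image and httpGet target are illustrative placeholders.
pod_with_prestop_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-http-hook",
            "image": "k8s.gcr.io/pause:3.1",
            "lifecycle": {
                "preStop": {
                    # The kubelet issues this GET when the pod is deleted,
                    # before the container receives SIGTERM.
                    "httpGet": {"path": "/echo?msg=prestop", "port": 8080}
                }
            },
        }],
    },
}
```

The repeated "Waiting for pod ... to disappear" lines are the test polling deletion; only after the pod is gone does it check that the hook fired.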
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:57:17.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 18 11:57:18.117: INFO: Waiting up to 5m0s for pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-nsw6t" to be "success or failure"
Dec 18 11:57:18.146: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.964574ms
Dec 18 11:57:20.361: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244067082s
Dec 18 11:57:22.402: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284920109s
Dec 18 11:57:25.613: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.496219508s
Dec 18 11:57:27.632: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.515397283s
Dec 18 11:57:29.648: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.531807495s
STEP: Saw pod success
Dec 18 11:57:29.649: INFO: Pod "pod-8a353ea1-218d-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:57:29.662: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8a353ea1-218d-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 11:57:30.645: INFO: Waiting for pod pod-8a353ea1-218d-11ea-ad77-0242ac110004 to disappear
Dec 18 11:57:30.964: INFO: Pod pod-8a353ea1-218d-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:57:30.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nsw6t" for this suite.
Dec 18 11:57:37.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:57:37.282: INFO: namespace: e2e-tests-emptydir-nsw6t, resource: bindings, ignored listing per whitelist
Dec 18 11:57:37.333: INFO: namespace e2e-tests-emptydir-nsw6t deletion completed in 6.339808396s

• [SLOW TEST:19.434 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
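In the test name "(non-root,0666,default)", the three parts are the user the container runs as, the file mode the test image creates and verifies, and the emptyDir medium (default, i.e. disk-backed rather than tmpfs). A sketch of the kind of pod this builds, assuming Kubernetes v1 API field names (UID, image, and command are illustrative):

```python
# Sketch of the emptyDir test pod: default medium, non-root UID; the 0666
# in the test name is the file mode the test container creates and checks.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0666"},
    "spec": {
        "securityContext": {"runAsUser": 1001},  # non-root (assumed UID)
        "containers": [{
            "name": "test-container",
            "image": "busybox:1.29",
            "command": ["sh", "-c", "ls -l /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            # medium "" selects the node's default storage; "Memory"
            # would select tmpfs instead.
            "emptyDir": {"medium": ""},
        }],
        "restartPolicy": "Never",
    },
}
```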
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:57:37.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 18 11:57:48.151: INFO: Successfully updated pod "annotationupdate95c6669a-218d-11ea-ad77-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:57:50.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-d8x7n" for this suite.
Dec 18 11:58:30.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:58:30.941: INFO: namespace: e2e-tests-downward-api-d8x7n, resource: bindings, ignored listing per whitelist
Dec 18 11:58:31.031: INFO: namespace e2e-tests-downward-api-d8x7n deletion completed in 40.790340118s

• [SLOW TEST:53.697 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
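This spec mounts the pod's own annotations through a downwardAPI volume, updates the annotations (the "Successfully updated pod" line), and waits for the kubelet to rewrite the projected file. A sketch of the volume definition involved, using v1 API field names (the volume name and file path are assumptions):

```python
# downwardAPI volume projecting metadata.annotations into a file; the
# kubelet refreshes the file when annotations change, which is what the
# test observes.
pod_volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "annotations",  # file name under the mount point
            "fieldRef": {"fieldPath": "metadata.annotations"},
        }]
    },
}
```

Note that this refresh path only works for fields like labels and annotations; downwardAPI files projected via `resourceFieldRef` or immutable fields do not change after pod creation.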
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:58:31.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b5d656e7-218d-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 11:58:31.515: INFO: Waiting up to 5m0s for pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-pp8t4" to be "success or failure"
Dec 18 11:58:31.528: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.993515ms
Dec 18 11:58:33.678: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162522288s
Dec 18 11:58:35.697: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18100596s
Dec 18 11:58:38.411: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.895104261s
Dec 18 11:58:40.447: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.931297205s
Dec 18 11:58:42.561: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.045009948s
STEP: Saw pod success
Dec 18 11:58:42.561: INFO: Pod "pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:58:42.574: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 18 11:58:42.766: INFO: Waiting for pod pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004 to disappear
Dec 18 11:58:42.781: INFO: Pod pod-secrets-b5f33222-218d-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:58:42.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pp8t4" for this suite.
Dec 18 11:58:48.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:58:48.967: INFO: namespace: e2e-tests-secrets-pp8t4, resource: bindings, ignored listing per whitelist
Dec 18 11:58:49.048: INFO: namespace e2e-tests-secrets-pp8t4 deletion completed in 6.252547694s

• [SLOW TEST:18.016 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
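"Consumable in multiple volumes" means one Secret is referenced by two volume entries and mounted at two paths in the same container, and the test verifies the same keys appear at both. A sketch of the pod spec, with assumed mount paths and command (the real secret name carries a generated UID, as the log shows):

```python
secret_name = "secret-test"  # the real test appends a generated UID

# One Secret backs two volumes; its keys appear at both mount points.
pod_spec = {
    "containers": [{
        "name": "secret-volume-test",
        "image": "busybox:1.29",
        "command": ["sh", "-c",
                    "ls /etc/secret-volume-1 /etc/secret-volume-2"],
        "volumeMounts": [
            {"name": "secret-volume-1",
             "mountPath": "/etc/secret-volume-1", "readOnly": True},
            {"name": "secret-volume-2",
             "mountPath": "/etc/secret-volume-2", "readOnly": True},
        ],
    }],
    "volumes": [
        {"name": "secret-volume-1", "secret": {"secretName": secret_name}},
        {"name": "secret-volume-2", "secret": {"secretName": secret_name}},
    ],
    "restartPolicy": "Never",
}
```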
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:58:49.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c08921a4-218d-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 11:58:49.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-thd5h" to be "success or failure"
Dec 18 11:58:49.417: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.994767ms
Dec 18 11:58:51.660: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25373817s
Dec 18 11:58:53.681: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275390907s
Dec 18 11:58:55.949: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.542753424s
Dec 18 11:58:57.964: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557800268s
Dec 18 11:58:59.982: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.576352261s
Dec 18 11:59:01.994: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.587785943s
STEP: Saw pod success
Dec 18 11:59:01.994: INFO: Pod "pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 11:59:01.998: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 18 11:59:02.732: INFO: Waiting for pod pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004 to disappear
Dec 18 11:59:02.986: INFO: Pod pod-configmaps-c09c157a-218d-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:59:02.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-thd5h" for this suite.
Dec 18 11:59:09.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:59:10.190: INFO: namespace: e2e-tests-configmap-thd5h, resource: bindings, ignored listing per whitelist
Dec 18 11:59:10.370: INFO: namespace e2e-tests-configmap-thd5h deletion completed in 7.369036259s

• [SLOW TEST:21.322 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
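In this spec's name, "mappings" means the volume uses `items` to map a ConfigMap key to a different file path, and "Item mode set" means that item carries its own per-file mode. A sketch of the volume definition, with illustrative key, path, and mode (the test's exact values are not shown in this log):

```python
# ConfigMap volume with a key-to-path mapping and a per-item file mode.
# The item's "mode" overrides the volume-level "defaultMode" for that file.
configmap_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",
        "items": [{
            "key": "data-1",           # key inside the ConfigMap
            "path": "path/to/data-2",  # file path under the mount point
            "mode": 0o400,             # per-item mode (assumed value)
        }],
    },
}
```

The test then reads the file's mode from inside the container (via the test image's log output) and compares it against the requested value.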
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:59:10.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 18 11:59:10.767: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 18 11:59:10.781: INFO: Waiting for terminating namespaces to be deleted...
Dec 18 11:59:10.786: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 18 11:59:10.800: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 18 11:59:10.800: INFO: 	Container coredns ready: true, restart count 0
Dec 18 11:59:10.800: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 18 11:59:10.800: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 11:59:10.800: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 18 11:59:10.800: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 18 11:59:10.800: INFO: 	Container weave ready: true, restart count 0
Dec 18 11:59:10.800: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 11:59:10.800: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 18 11:59:10.800: INFO: 	Container coredns ready: true, restart count 0
Dec 18 11:59:10.800: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 18 11:59:10.800: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 18 11:59:10.800: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d4a91bdc-218d-11ea-ad77-0242ac110004 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d4a91bdc-218d-11ea-ad77-0242ac110004 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d4a91bdc-218d-11ea-ad77-0242ac110004
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 11:59:35.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-jzts8" for this suite.
Dec 18 11:59:59.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 11:59:59.589: INFO: namespace: e2e-tests-sched-pred-jzts8, resource: bindings, ignored listing per whitelist
Dec 18 11:59:59.699: INFO: namespace e2e-tests-sched-pred-jzts8 deletion completed in 24.441428043s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:49.329 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
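The STEP lines spell out the predicate check: launch an unlabeled pod to discover a schedulable node, apply a random label with value 42 to that node, then relaunch the pod with a matching nodeSelector and confirm it lands there. A sketch of the relaunched pod, with an illustrative label key standing in for the generated `kubernetes.io/e2e-...` key in the log:

```python
# The test labels the chosen node and requires that label via nodeSelector,
# so the scheduler may only place the pod on that node.
node_label = {"kubernetes.io/e2e-example": "42"}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "with-labels"},
    "spec": {
        "containers": [{"name": "with-labels",
                        "image": "k8s.gcr.io/pause:3.1"}],
        "nodeSelector": dict(node_label),
    },
}
```

The [Serial] tag on this spec exists because node labeling mutates shared cluster state; the AfterEach removes the label so later tests see a clean node.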
SSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 11:59:59.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 18 12:00:12.675: INFO: Successfully updated pod "pod-update-eab409f7-218d-11ea-ad77-0242ac110004"
STEP: verifying the updated pod is in kubernetes
Dec 18 12:00:12.786: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:00:12.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k9r8h" for this suite.
Dec 18 12:00:36.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:00:36.946: INFO: namespace: e2e-tests-pods-k9r8h, resource: bindings, ignored listing per whitelist
Dec 18 12:00:36.954: INFO: namespace e2e-tests-pods-k9r8h deletion completed in 24.157979852s

• [SLOW TEST:37.254 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
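Kubernetes allows only a few pod fields to change after creation (notably labels, annotations, container images, and activeDeadlineSeconds); the test mutates one and re-reads the pod ("Pod update OK"). The log does not show which field this run changed, so the label update below is an assumed example, with a minimal stand-in for the merge semantics:

```python
import copy

# Hypothetical in-place pod update: patch a label, leaving the original
# object untouched. This mimics only metadata.labels merging, nothing more.
existing = {"metadata": {"name": "pod-update", "labels": {"time": "old"}}}
patch = {"metadata": {"labels": {"time": "new"}}}

def apply_label_patch(pod, patch):
    updated = copy.deepcopy(pod)
    updated["metadata"]["labels"].update(patch["metadata"]["labels"])
    return updated

updated = apply_label_patch(existing, patch)
```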
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:00:36.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 18 12:00:37.130: INFO: PodSpec: initContainers in spec.initContainers
Dec 18 12:01:47.386: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-00d5c637-218e-11ea-ad77-0242ac110004", GenerateName:"", Namespace:"e2e-tests-init-container-dtkbf", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-dtkbf/pods/pod-init-00d5c637-218e-11ea-ad77-0242ac110004", UID:"00d6f5b7-218e-11ea-a994-fa163e34d433", ResourceVersion:"15230092", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712267237, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"130420523"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2hltc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020826c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2hltc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2hltc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2hltc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d6a7f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002436060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d6ab70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d6ab90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d6ab98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d6ab9c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712267237, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712267237, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712267237, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712267237, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001e1c100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016c05b0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016c0620)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://975776fb651892377ca783f6c215dbdcf32cbc2431cda262614b974599631c93"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e1c140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e1c120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:01:47.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-dtkbf" for this suite.
Dec 18 12:02:11.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:02:11.688: INFO: namespace: e2e-tests-init-container-dtkbf, resource: bindings, ignored listing per whitelist
Dec 18 12:02:11.712: INFO: namespace e2e-tests-init-container-dtkbf deletion completed in 24.26173683s

• [SLOW TEST:94.757 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
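The block above verifies that a RestartAlways pod whose init container keeps failing (note `RestartCount:3` on `init1` in the status dump) never starts its app container. As an illustrative sketch only, not the manifest the suite actually submits, a pod of that shape can be written as a plain Python dict using the container names from the log; the commands are assumptions:

```python
# Sketch of a RestartAlways pod whose first init container always fails.
# Container names (init1, init2, run1) and images match the status dump
# above; the commands are assumed for illustration.
failing_init_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-init-fail"},
    "spec": {
        "restartPolicy": "Always",
        "initContainers": [
            # Exits non-zero every time, so the kubelet keeps restarting it.
            {"name": "init1", "image": "busybox:1.29",
             "command": ["/bin/false"]},
            # Never runs: init containers execute strictly in order.
            {"name": "init2", "image": "busybox:1.29",
             "command": ["/bin/true"]},
        ],
        "containers": [
            # Stays Waiting forever, which is what the test asserts.
            {"name": "run1", "image": "k8s.gcr.io/pause:3.1"},
        ],
    },
}

# All init containers must succeed, in order, before any app container starts.
assert [c["name"] for c in failing_init_pod["spec"]["initContainers"]] == ["init1", "init2"]
```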
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:02:11.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 18 12:02:34.447: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:34.459: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:36.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:36.511: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:38.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:38.486: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:40.462: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:40.494: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:42.461: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:42.483: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:44.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:44.482: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:46.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:46.526: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:48.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:48.496: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:50.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:50.492: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:52.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:52.481: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 12:02:54.460: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 12:02:54.483: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:02:54.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-kgsbz" for this suite.
Dec 18 12:03:18.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:03:18.868: INFO: namespace: e2e-tests-container-lifecycle-hook-kgsbz, resource: bindings, ignored listing per whitelist
Dec 18 12:03:18.908: INFO: namespace e2e-tests-container-lifecycle-hook-kgsbz deletion completed in 24.415050433s

• [SLOW TEST:67.196 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
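The test above creates a helper pod to receive the HTTPGet hook request, then a pod whose container carries a postStart `httpGet` lifecycle hook. A hedged sketch of such a pod follows; the pod name matches the log, while the hook path, host IP, and port are assumptions:

```python
# Sketch of a pod with a postStart httpGet lifecycle hook. The kubelet
# issues the GET immediately after the container starts; the suite's
# helper pod serves the request. path/host/port here are assumed values.
poststart_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-http-hook",
            "image": "k8s.gcr.io/pause:3.1",
            "lifecycle": {
                "postStart": {
                    "httpGet": {
                        "path": "/echo?msg=poststart",  # assumed path
                        "host": "10.32.0.4",            # handler pod IP (assumed)
                        "port": 8080,                   # assumed port
                    }
                }
            },
        }],
    },
}
```

If the hook fails, the kubelet kills and restarts the container according to its restart policy, which is why the test only has to check that the handler observed the request.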
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:03:18.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6167751f-218e-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 12:03:19.200: INFO: Waiting up to 5m0s for pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-vpshg" to be "success or failure"
Dec 18 12:03:19.290: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 88.98375ms
Dec 18 12:03:21.624: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42373352s
Dec 18 12:03:23.646: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44522296s
Dec 18 12:03:27.148: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.947565296s
Dec 18 12:03:29.169: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.968488085s
Dec 18 12:03:31.197: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.996409741s
STEP: Saw pod success
Dec 18 12:03:31.197: INFO: Pod "pod-secrets-616803db-218e-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:03:31.204: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-616803db-218e-11ea-ad77-0242ac110004 container secret-env-test: 
STEP: delete the pod
Dec 18 12:03:31.378: INFO: Waiting for pod pod-secrets-616803db-218e-11ea-ad77-0242ac110004 to disappear
Dec 18 12:03:31.508: INFO: Pod pod-secrets-616803db-218e-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:03:31.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vpshg" for this suite.
Dec 18 12:03:37.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:03:37.747: INFO: namespace: e2e-tests-secrets-vpshg, resource: bindings, ignored listing per whitelist
Dec 18 12:03:37.922: INFO: namespace e2e-tests-secrets-vpshg deletion completed in 6.40441412s

• [SLOW TEST:19.013 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
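The Secrets test above injects a secret key into a container's environment via `secretKeyRef`. A minimal sketch of that wiring, reusing the secret and container names from the log but with an assumed key and variable name:

```python
# Sketch: consuming a Secret key as an environment variable. The secret
# name and container name come from the log above; SECRET_DATA and the
# key "data-1" are assumptions.
secret_env_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "secret-env-test",
            "image": "busybox:1.29",
            # Print the environment so the test can grep the value in logs.
            "command": ["sh", "-c", "env"],
            "env": [{
                "name": "SECRET_DATA",  # assumed variable name
                "valueFrom": {
                    "secretKeyRef": {
                        "name": "secret-test-6167751f-218e-11ea-ad77-0242ac110004",
                        "key": "data-1",  # assumed key
                    }
                },
            }],
        }],
    },
}
```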
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:03:37.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 18 12:03:38.148: INFO: Waiting up to 5m0s for pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-fftdw" to be "success or failure"
Dec 18 12:03:38.186: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.920482ms
Dec 18 12:03:40.489: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341618038s
Dec 18 12:03:42.514: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366218282s
Dec 18 12:03:44.541: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393603452s
Dec 18 12:03:46.573: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.425336105s
Dec 18 12:03:48.630: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.482014861s
Dec 18 12:03:50.642: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.49464315s
STEP: Saw pod success
Dec 18 12:03:50.642: INFO: Pod "pod-6cb9e8fb-218e-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:03:50.655: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6cb9e8fb-218e-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 12:03:51.514: INFO: Waiting for pod pod-6cb9e8fb-218e-11ea-ad77-0242ac110004 to disappear
Dec 18 12:03:51.804: INFO: Pod pod-6cb9e8fb-218e-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:03:51.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fftdw" for this suite.
Dec 18 12:03:57.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:03:57.945: INFO: namespace: e2e-tests-emptydir-fftdw, resource: bindings, ignored listing per whitelist
Dec 18 12:03:58.094: INFO: namespace e2e-tests-emptydir-fftdw deletion completed in 6.268456111s

• [SLOW TEST:20.172 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
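The "(non-root,0666,tmpfs)" case above means: a tmpfs-backed emptyDir (`medium: Memory`), written by a non-root container, with the created file expected to carry mode 0666. A sketch under those assumptions (the UID, mount path, and command are illustrative; the container name is from the log):

```python
# Sketch of a tmpfs emptyDir mounted into a non-root container that
# creates a 0666 file. UID and paths are assumptions.
emptydir_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-tmpfs"},
    "spec": {
        "securityContext": {"runAsUser": 1001},  # non-root (assumed UID)
        "volumes": [{"name": "test-volume",
                     "emptyDir": {"medium": "Memory"}}],  # tmpfs backing
        "containers": [{
            "name": "test-container",
            "image": "busybox:1.29",
            # With umask 0, touch creates the file as 0666; stat shows it.
            "command": ["sh", "-c",
                        "umask 0; touch /test-volume/f && stat -c %a /test-volume/f"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
    },
}
```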
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:03:58.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 18 12:03:58.287: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:04:15.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wpj5g" for this suite.
Dec 18 12:04:22.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:04:22.272: INFO: namespace: e2e-tests-init-container-wpj5g, resource: bindings, ignored listing per whitelist
Dec 18 12:04:22.323: INFO: namespace e2e-tests-init-container-wpj5g deletion completed in 6.284166347s

• [SLOW TEST:24.229 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:04:22.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:04:22.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-bj4dr" to be "success or failure"
Dec 18 12:04:22.966: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.142452ms
Dec 18 12:04:24.987: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032870183s
Dec 18 12:04:27.024: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069300961s
Dec 18 12:04:29.739: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.785044247s
Dec 18 12:04:31.767: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.812256583s
Dec 18 12:04:33.795: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.840509178s
STEP: Saw pod success
Dec 18 12:04:33.795: INFO: Pod "downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:04:33.841: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 12:04:35.333: INFO: Waiting for pod downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004 to disappear
Dec 18 12:04:35.382: INFO: Pod downwardapi-volume-87641545-218e-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:04:35.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bj4dr" for this suite.
Dec 18 12:04:41.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:04:41.636: INFO: namespace: e2e-tests-projected-bj4dr, resource: bindings, ignored listing per whitelist
Dec 18 12:04:41.647: INFO: namespace e2e-tests-projected-bj4dr deletion completed in 6.251765766s

• [SLOW TEST:19.323 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
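The projected downward API test above exposes the container's memory request as a file in a projected volume via `resourceFieldRef`. A sketch of that mechanism, with the container name taken from the log and the volume name, file path, and request size assumed:

```python
# Sketch of a projected downward API volume surfacing requests.memory as
# a file. "client-container" matches the log; everything else is assumed.
projected_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox:1.29",
            "command": ["sh", "-c", "cat /etc/podinfo/memory_request"],
            "resources": {"requests": {"memory": "32Mi"}},  # assumed value
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "memory_request",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.memory",
                    },
                }]}
            }]},
        }],
    },
}
```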
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:04:41.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 18 12:04:43.041: INFO: Pod name wrapped-volume-race-935b4121-218e-11ea-ad77-0242ac110004: Found 0 pods out of 5
Dec 18 12:04:48.064: INFO: Pod name wrapped-volume-race-935b4121-218e-11ea-ad77-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-935b4121-218e-11ea-ad77-0242ac110004 in namespace e2e-tests-emptydir-wrapper-x2h9c, will wait for the garbage collector to delete the pods
Dec 18 12:06:52.254: INFO: Deleting ReplicationController wrapped-volume-race-935b4121-218e-11ea-ad77-0242ac110004 took: 35.90861ms
Dec 18 12:06:52.555: INFO: Terminating ReplicationController wrapped-volume-race-935b4121-218e-11ea-ad77-0242ac110004 pods took: 301.701685ms
STEP: Creating RC which spawns configmap-volume pods
Dec 18 12:07:43.155: INFO: Pod name wrapped-volume-race-fea5c8f6-218e-11ea-ad77-0242ac110004: Found 0 pods out of 5
Dec 18 12:07:48.187: INFO: Pod name wrapped-volume-race-fea5c8f6-218e-11ea-ad77-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fea5c8f6-218e-11ea-ad77-0242ac110004 in namespace e2e-tests-emptydir-wrapper-x2h9c, will wait for the garbage collector to delete the pods
Dec 18 12:09:30.608: INFO: Deleting ReplicationController wrapped-volume-race-fea5c8f6-218e-11ea-ad77-0242ac110004 took: 42.684218ms
Dec 18 12:09:31.009: INFO: Terminating ReplicationController wrapped-volume-race-fea5c8f6-218e-11ea-ad77-0242ac110004 pods took: 401.169059ms
STEP: Creating RC which spawns configmap-volume pods
Dec 18 12:10:22.792: INFO: Pod name wrapped-volume-race-5dde5b3a-218f-11ea-ad77-0242ac110004: Found 0 pods out of 5
Dec 18 12:10:27.835: INFO: Pod name wrapped-volume-race-5dde5b3a-218f-11ea-ad77-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5dde5b3a-218f-11ea-ad77-0242ac110004 in namespace e2e-tests-emptydir-wrapper-x2h9c, will wait for the garbage collector to delete the pods
Dec 18 12:12:24.389: INFO: Deleting ReplicationController wrapped-volume-race-5dde5b3a-218f-11ea-ad77-0242ac110004 took: 42.604759ms
Dec 18 12:12:24.990: INFO: Terminating ReplicationController wrapped-volume-race-5dde5b3a-218f-11ea-ad77-0242ac110004 pods took: 601.071439ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:13:10.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-x2h9c" for this suite.
Dec 18 12:13:20.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:13:20.409: INFO: namespace: e2e-tests-emptydir-wrapper-x2h9c, resource: bindings, ignored listing per whitelist
Dec 18 12:13:20.415: INFO: namespace e2e-tests-emptydir-wrapper-x2h9c deletion completed in 10.16987877s

• [SLOW TEST:518.768 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:13:20.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Dec 18 12:13:21.407: INFO: created pod pod-service-account-defaultsa
Dec 18 12:13:21.407: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 18 12:13:21.461: INFO: created pod pod-service-account-mountsa
Dec 18 12:13:21.461: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 18 12:13:21.584: INFO: created pod pod-service-account-nomountsa
Dec 18 12:13:21.584: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 18 12:13:21.627: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 18 12:13:21.627: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 18 12:13:21.781: INFO: created pod pod-service-account-mountsa-mountspec
Dec 18 12:13:21.782: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 18 12:13:21.873: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 18 12:13:21.874: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 18 12:13:22.032: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 18 12:13:22.032: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 18 12:13:22.076: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 18 12:13:22.077: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 18 12:13:23.403: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 18 12:13:23.403: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:13:23.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jzqk5" for this suite.
Dec 18 12:14:00.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:14:01.082: INFO: namespace: e2e-tests-svcaccounts-jzqk5, resource: bindings, ignored listing per whitelist
Dec 18 12:14:01.137: INFO: namespace e2e-tests-svcaccounts-jzqk5 deletion completed in 36.844804417s

• [SLOW TEST:40.721 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
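The ServiceAccounts matrix above exercises the interaction between `automountServiceAccountToken` on the ServiceAccount and on the pod spec, with the pod-level field taking precedence. A sketch of the pure opt-out case (pod name matches the log; the service account name and container are assumptions):

```python
# Sketch: opting out of API token automount at the pod level. The
# pod-level field overrides the ServiceAccount's default, which is the
# precedence this test matrix verifies.
nomount_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-service-account-nomountsa"},
    "spec": {
        "serviceAccountName": "default",        # assumed SA name
        "automountServiceAccountToken": False,  # opt out: no token volume mount
        "containers": [{"name": "token-test",   # assumed container
                        "image": "k8s.gcr.io/pause:3.1"}],
    },
}
```

This corresponds to the log line "pod pod-service-account-nomountsa service account token volume mount: false".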
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:14:01.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 18 12:14:01.398: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:14:20.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-b4g2p" for this suite.
Dec 18 12:14:26.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:14:26.337: INFO: namespace: e2e-tests-init-container-b4g2p, resource: bindings, ignored listing per whitelist
Dec 18 12:14:26.350: INFO: namespace e2e-tests-init-container-b4g2p deletion completed in 6.196484565s

• [SLOW TEST:25.212 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:14:26.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 18 12:14:36.655: INFO: Pod pod-hostip-ef3f4d20-218f-11ea-ad77-0242ac110004 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:14:36.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pstv6" for this suite.
Dec 18 12:15:02.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:15:02.896: INFO: namespace: e2e-tests-pods-pstv6, resource: bindings, ignored listing per whitelist
Dec 18 12:15:02.950: INFO: namespace e2e-tests-pods-pstv6 deletion completed in 26.290013765s

• [SLOW TEST:36.599 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
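The host IP test above reads `status.hostIP` from the pod object via the API. As a related sketch (a different mechanism from what the test does, included only for illustration), the same value can be surfaced inside the container through a downward API `fieldRef`; names here are assumptions:

```python
# Sketch: exposing status.hostIP to the container itself via the downward
# API. The e2e test instead reads the field from the API server; this is
# an illustrative alternative, with assumed names.
hostip_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-hostip-example"},
    "spec": {
        "containers": [{
            "name": "test",
            "image": "busybox:1.29",
            "command": ["sh", "-c", "echo $HOST_IP"],
            "env": [{
                "name": "HOST_IP",
                "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
            }],
        }],
    },
}
```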
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:15:02.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:15:03.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-2wtn8" to be "success or failure"
Dec 18 12:15:03.183: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.977099ms
Dec 18 12:15:05.624: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.458594874s
Dec 18 12:15:07.656: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490751151s
Dec 18 12:15:09.700: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535120987s
Dec 18 12:15:11.717: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551806252s
Dec 18 12:15:13.733: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.567695913s
STEP: Saw pod success
Dec 18 12:15:13.733: INFO: Pod "downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:15:13.741: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 12:15:14.426: INFO: Waiting for pod downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004 to disappear
Dec 18 12:15:14.445: INFO: Pod downwardapi-volume-04fa8bb3-2190-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:15:14.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2wtn8" for this suite.
Dec 18 12:15:20.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:15:20.735: INFO: namespace: e2e-tests-downward-api-2wtn8, resource: bindings, ignored listing per whitelist
Dec 18 12:15:20.827: INFO: namespace e2e-tests-downward-api-2wtn8 deletion completed in 6.372101692s

• [SLOW TEST:17.877 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:15:20.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 12:15:29.144: INFO: Waiting up to 5m0s for pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004" in namespace "e2e-tests-pods-2bptz" to be "success or failure"
Dec 18 12:15:29.254: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 109.183978ms
Dec 18 12:15:31.270: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125225717s
Dec 18 12:15:33.295: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150680926s
Dec 18 12:15:35.466: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321436959s
Dec 18 12:15:37.814: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.669453534s
Dec 18 12:15:40.054: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.909641669s
STEP: Saw pod success
Dec 18 12:15:40.055: INFO: Pod "client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:15:40.074: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004 container env3cont: 
STEP: delete the pod
Dec 18 12:15:40.283: INFO: Waiting for pod client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004 to disappear
Dec 18 12:15:40.304: INFO: Pod client-envvars-147e4c8d-2190-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:15:40.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2bptz" for this suite.
Dec 18 12:16:36.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:16:36.638: INFO: namespace: e2e-tests-pods-2bptz, resource: bindings, ignored listing per whitelist
Dec 18 12:16:36.689: INFO: namespace e2e-tests-pods-2bptz deletion completed in 56.37365882s

• [SLOW TEST:75.861 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:16:36.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 18 12:16:36.894: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 18 12:16:36.912: INFO: Waiting for terminating namespaces to be deleted...
Dec 18 12:16:36.918: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 18 12:16:36.934: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 18 12:16:36.934: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 18 12:16:36.934: INFO: 	Container coredns ready: true, restart count 0
Dec 18 12:16:36.934: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 18 12:16:36.934: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 12:16:36.934: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 18 12:16:36.934: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 18 12:16:36.934: INFO: 	Container weave ready: true, restart count 0
Dec 18 12:16:36.934: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 12:16:36.934: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 18 12:16:36.934: INFO: 	Container coredns ready: true, restart count 0
Dec 18 12:16:36.934: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 18 12:16:36.934: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 18 12:16:37.111: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d07548c-2190-11ea-ad77-0242ac110004.15e17648422bf9f9], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-lpxjq/filler-pod-3d07548c-2190-11ea-ad77-0242ac110004 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d07548c-2190-11ea-ad77-0242ac110004.15e176495af4705f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d07548c-2190-11ea-ad77-0242ac110004.15e17649cc6bc691], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d07548c-2190-11ea-ad77-0242ac110004.15e1764a070e1e9f], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e1764a9c0ccced], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:16:48.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-lpxjq" for this suite.
Dec 18 12:16:56.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:16:57.011: INFO: namespace: e2e-tests-sched-pred-lpxjq, resource: bindings, ignored listing per whitelist
Dec 18 12:16:57.011: INFO: namespace e2e-tests-sched-pred-lpxjq deletion completed in 8.353513874s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.322 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:16:57.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:16:58.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-c7r5n" for this suite.
Dec 18 12:17:04.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:17:04.217: INFO: namespace: e2e-tests-services-c7r5n, resource: bindings, ignored listing per whitelist
Dec 18 12:17:04.285: INFO: namespace e2e-tests-services-c7r5n deletion completed in 6.194164765s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:7.273 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:17:04.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4d764c8d-2190-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 12:17:04.734: INFO: Waiting up to 5m0s for pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-ps2mp" to be "success or failure"
Dec 18 12:17:04.750: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.53927ms
Dec 18 12:17:06.860: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125023692s
Dec 18 12:17:08.928: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193383617s
Dec 18 12:17:10.947: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211965217s
Dec 18 12:17:12.991: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256277267s
Dec 18 12:17:15.012: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.277373455s
STEP: Saw pod success
Dec 18 12:17:15.012: INFO: Pod "pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:17:15.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 18 12:17:15.322: INFO: Waiting for pod pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004 to disappear
Dec 18 12:17:15.340: INFO: Pod pod-secrets-4d78750f-2190-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:17:15.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ps2mp" for this suite.
Dec 18 12:17:21.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:17:21.539: INFO: namespace: e2e-tests-secrets-ps2mp, resource: bindings, ignored listing per whitelist
Dec 18 12:17:21.636: INFO: namespace e2e-tests-secrets-ps2mp deletion completed in 6.286778698s

• [SLOW TEST:17.351 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:17:21.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Dec 18 12:17:21.740: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 18 12:17:21.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:17:24.052: INFO: stderr: ""
Dec 18 12:17:24.052: INFO: stdout: "service/redis-slave created\n"
Dec 18 12:17:24.053: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 18 12:17:24.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:17:24.689: INFO: stderr: ""
Dec 18 12:17:24.689: INFO: stdout: "service/redis-master created\n"
Dec 18 12:17:24.690: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 18 12:17:24.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:17:25.342: INFO: stderr: ""
Dec 18 12:17:25.342: INFO: stdout: "service/frontend created\n"
Dec 18 12:17:25.343: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 18 12:17:25.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:17:25.723: INFO: stderr: ""
Dec 18 12:17:25.724: INFO: stdout: "deployment.extensions/frontend created\n"
Dec 18 12:17:25.725: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 18 12:17:25.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:17:26.323: INFO: stderr: ""
Dec 18 12:17:26.323: INFO: stdout: "deployment.extensions/redis-master created\n"
Dec 18 12:17:26.325: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 18 12:17:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:17:26.797: INFO: stderr: ""
Dec 18 12:17:26.798: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Dec 18 12:17:26.798: INFO: Waiting for all frontend pods to be Running.
Dec 18 12:17:51.852: INFO: Waiting for frontend to serve content.
Dec 18 12:17:54.654: INFO: Trying to add a new entry to the guestbook.
Dec 18 12:17:54.711: INFO: Verifying that added entry can be retrieved.
Dec 18 12:17:54.731: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 18 12:17:59.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:18:00.146: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:18:00.147: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:18:00.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:18:00.508: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:18:00.508: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:18:00.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:18:00.908: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:18:00.908: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:18:00.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:18:01.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:18:01.157: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:18:01.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:18:01.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:18:01.441: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:18:01.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lfjf9'
Dec 18 12:18:01.688: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:18:01.688: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:18:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lfjf9" for this suite.
Dec 18 12:19:02.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:19:02.334: INFO: namespace: e2e-tests-kubectl-lfjf9, resource: bindings, ignored listing per whitelist
Dec 18 12:19:02.348: INFO: namespace e2e-tests-kubectl-lfjf9 deletion completed in 1m0.632112569s

• [SLOW TEST:100.712 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:19:02.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 12:19:02.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-x2rgk'
Dec 18 12:19:02.789: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 12:19:02.789: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 18 12:19:07.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-x2rgk'
Dec 18 12:19:07.583: INFO: stderr: ""
Dec 18 12:19:07.583: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:19:07.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x2rgk" for this suite.
Dec 18 12:19:13.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:19:14.024: INFO: namespace: e2e-tests-kubectl-x2rgk, resource: bindings, ignored listing per whitelist
Dec 18 12:19:14.061: INFO: namespace e2e-tests-kubectl-x2rgk deletion completed in 6.444696987s

• [SLOW TEST:11.713 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:19:14.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 18 12:19:14.593: INFO: Waiting up to 5m0s for pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-8kv8b" to be "success or failure"
Dec 18 12:19:14.669: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 76.526037ms
Dec 18 12:19:16.680: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087406736s
Dec 18 12:19:18.693: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099804948s
Dec 18 12:19:20.731: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138157328s
Dec 18 12:19:23.002: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.409461375s
Dec 18 12:19:25.015: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.422671676s
STEP: Saw pod success
Dec 18 12:19:25.016: INFO: Pod "downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:19:25.022: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 12:19:25.074: INFO: Waiting for pod downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004 to disappear
Dec 18 12:19:25.329: INFO: Pod downward-api-9ad0b5e4-2190-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:19:25.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8kv8b" for this suite.
Dec 18 12:19:31.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:19:31.610: INFO: namespace: e2e-tests-downward-api-8kv8b, resource: bindings, ignored listing per whitelist
Dec 18 12:19:31.623: INFO: namespace e2e-tests-downward-api-8kv8b deletion completed in 6.262189784s

• [SLOW TEST:17.560 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
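The spec above exercises the downward API: pod metadata is projected into container environment variables via `fieldRef`. A minimal sketch of such a pod manifest as a plain Python dict (the names `dapi-demo` and `POD_*` are illustrative, not taken from the test):

```python
# Sketch of a pod manifest that exposes pod metadata as env vars through
# the downward API, the mechanism the spec above verifies. All names here
# are hypothetical examples, not the test's own objects.

def downward_api_env(name: str, field_path: str) -> dict:
    """Build one env-var entry backed by a downward-API fieldRef."""
    return {"name": name, "valueFrom": {"fieldRef": {"fieldPath": field_path}}}

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "dapi-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "env": [
                downward_api_env("POD_NAME", "metadata.name"),
                downward_api_env("POD_NAMESPACE", "metadata.namespace"),
                downward_api_env("POD_IP", "status.podIP"),
            ],
        }],
    },
}
```

When such a pod runs, `env` prints the resolved values, which is what the test inspects in the container log before the pod phase reaches `Succeeded`.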
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:19:31.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-5956h
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 12:19:31.779: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 12:20:08.114: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.4:8080/dial?request=hostName&protocol=udp&host=10.32.0.5&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-5956h PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 12:20:08.114: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 12:20:08.738: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:20:08.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-5956h" for this suite.
Dec 18 12:20:32.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:20:33.091: INFO: namespace: e2e-tests-pod-network-test-5956h, resource: bindings, ignored listing per whitelist
Dec 18 12:20:33.170: INFO: namespace e2e-tests-pod-network-test-5956h deletion completed in 24.412428669s

• [SLOW TEST:61.547 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
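The intra-pod UDP check above works by exec'ing `curl` in a host-exec pod against a prober pod's `/dial` endpoint, which in turn dials the target pod over UDP. A sketch that reconstructs the probe URL seen in the `ExecWithOptions` log line (the parameter names mirror that URL; the helper itself is illustrative):

```python
from urllib.parse import urlencode

def dial_url(prober_ip: str, target_ip: str, port: int,
             protocol: str = "udp", tries: int = 1) -> str:
    """Rebuild the /dial probe URL the test issues via curl; the prober
    pod forwards the request to the target over the given protocol."""
    query = urlencode({"request": "hostName", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{prober_ip}:8080/dial?{query}"

# The exact probe visible in the log output above.
url = dial_url("10.32.0.4", "10.32.0.5", 8081)
```

The test then waits for the dialed pod's hostname to appear in the response; the `Waiting for endpoints: map[]` line marks that wait completing with nothing left outstanding.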
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:20:33.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 18 12:20:33.399: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6lqzq,SelfLink:/api/v1/namespaces/e2e-tests-watch-6lqzq/configmaps/e2e-watch-test-label-changed,UID:c9d8eab9-2190-11ea-a994-fa163e34d433,ResourceVersion:15232586,Generation:0,CreationTimestamp:2019-12-18 12:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 12:20:33.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6lqzq,SelfLink:/api/v1/namespaces/e2e-tests-watch-6lqzq/configmaps/e2e-watch-test-label-changed,UID:c9d8eab9-2190-11ea-a994-fa163e34d433,ResourceVersion:15232587,Generation:0,CreationTimestamp:2019-12-18 12:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 18 12:20:33.399: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6lqzq,SelfLink:/api/v1/namespaces/e2e-tests-watch-6lqzq/configmaps/e2e-watch-test-label-changed,UID:c9d8eab9-2190-11ea-a994-fa163e34d433,ResourceVersion:15232588,Generation:0,CreationTimestamp:2019-12-18 12:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 18 12:20:43.561: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6lqzq,SelfLink:/api/v1/namespaces/e2e-tests-watch-6lqzq/configmaps/e2e-watch-test-label-changed,UID:c9d8eab9-2190-11ea-a994-fa163e34d433,ResourceVersion:15232602,Generation:0,CreationTimestamp:2019-12-18 12:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 12:20:43.561: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6lqzq,SelfLink:/api/v1/namespaces/e2e-tests-watch-6lqzq/configmaps/e2e-watch-test-label-changed,UID:c9d8eab9-2190-11ea-a994-fa163e34d433,ResourceVersion:15232603,Generation:0,CreationTimestamp:2019-12-18 12:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 18 12:20:43.562: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-6lqzq,SelfLink:/api/v1/namespaces/e2e-tests-watch-6lqzq/configmaps/e2e-watch-test-label-changed,UID:c9d8eab9-2190-11ea-a994-fa163e34d433,ResourceVersion:15232604,Generation:0,CreationTimestamp:2019-12-18 12:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:20:43.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6lqzq" for this suite.
Dec 18 12:20:49.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:20:49.744: INFO: namespace: e2e-tests-watch-6lqzq, resource: bindings, ignored listing per whitelist
Dec 18 12:20:49.922: INFO: namespace e2e-tests-watch-6lqzq deletion completed in 6.352008066s

• [SLOW TEST:16.752 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
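The watch spec above hinges on selector semantics: when an object's label drifts out of the watch's selector, the watch reports DELETED; when the label is restored, it reports ADDED again, and intermediate updates while out of scope are invisible. A small simulation of those semantics (a sketch of the behavior the test verifies, not client-go itself):

```python
def watch_events(selector: dict, updates: list) -> list:
    """Simulate the event stream a label-selector watch emits as an
    object's labels drift in and out of the selector. Each update is
    (labels, deleted); the return value is the visible event types."""
    events, was_matching = [], False
    for labels, deleted in updates:
        matches = (not deleted) and all(labels.get(k) == v
                                        for k, v in selector.items())
        if matches and not was_matching:
            events.append("ADDED")
        elif matches and was_matching:
            events.append("MODIFIED")
        elif was_matching and not matches:
            events.append("DELETED")
        was_matching = matches
    return events

# Create, modify, change the label away (update unseen), restore it,
# modify, then delete -- the same sequence the spec drives.
stream = watch_events(
    {"watch-this-configmap": "label-changed-and-restored"},
    [({"watch-this-configmap": "label-changed-and-restored"}, False),
     ({"watch-this-configmap": "label-changed-and-restored"}, False),
     ({"watch-this-configmap": "wrong-value"}, False),
     ({"watch-this-configmap": "wrong-value"}, False),
     ({"watch-this-configmap": "label-changed-and-restored"}, False),
     ({"watch-this-configmap": "label-changed-and-restored"}, False),
     ({}, True)],
)
```

The resulting stream is ADDED, MODIFIED, DELETED, ADDED, MODIFIED, DELETED — exactly the six `Got :` events in the log above, with the out-of-scope modification producing no notification.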
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:20:49.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 18 12:20:50.169: INFO: Waiting up to 5m0s for pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004" in namespace "e2e-tests-containers-kkk24" to be "success or failure"
Dec 18 12:20:50.184: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.032401ms
Dec 18 12:20:52.199: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029771416s
Dec 18 12:20:54.222: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053207352s
Dec 18 12:20:56.847: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.67767337s
Dec 18 12:20:58.866: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697029356s
Dec 18 12:21:00.942: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.772647352s
STEP: Saw pod success
Dec 18 12:21:00.942: INFO: Pod "client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:21:00.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 12:21:01.277: INFO: Waiting for pod client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004 to disappear
Dec 18 12:21:01.290: INFO: Pod client-containers-d3d90e8a-2190-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:21:01.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-kkk24" for this suite.
Dec 18 12:21:09.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:21:09.599: INFO: namespace: e2e-tests-containers-kkk24, resource: bindings, ignored listing per whitelist
Dec 18 12:21:09.633: INFO: namespace e2e-tests-containers-kkk24 deletion completed in 8.331916644s

• [SLOW TEST:19.710 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
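The "override the image's default arguments" spec above relies on the Docker/Kubernetes precedence rule: a pod's `command` replaces the image ENTRYPOINT, and its `args` replace the image CMD, independently. A sketch of that resolution rule (the function and sample values are illustrative):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Resolve what actually runs in the container: pod `command`
    overrides the image ENTRYPOINT, pod `args` override the image CMD;
    unset fields fall back to the image defaults."""
    return ((command if command is not None else entrypoint or []) +
            (args if args is not None else cmd or []))

# Image built with ENTRYPOINT ["/ep"] and CMD ["default"]; the pod
# overrides only args, as the spec above does.
inv = effective_invocation(["/ep"], ["default"], args=["override", "arguments"])
```

With no overrides the image defaults run unchanged; overriding `args` alone keeps the ENTRYPOINT and swaps the CMD, which is what the test asserts by reading the container output.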
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:21:09.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 18 12:21:09.934: INFO: Number of nodes with available pods: 0
Dec 18 12:21:09.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:11.438: INFO: Number of nodes with available pods: 0
Dec 18 12:21:11.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:11.975: INFO: Number of nodes with available pods: 0
Dec 18 12:21:11.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:12.997: INFO: Number of nodes with available pods: 0
Dec 18 12:21:12.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:13.975: INFO: Number of nodes with available pods: 0
Dec 18 12:21:13.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:15.740: INFO: Number of nodes with available pods: 0
Dec 18 12:21:15.740: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:16.006: INFO: Number of nodes with available pods: 0
Dec 18 12:21:16.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:17.015: INFO: Number of nodes with available pods: 0
Dec 18 12:21:17.015: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:17.975: INFO: Number of nodes with available pods: 0
Dec 18 12:21:17.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:18.956: INFO: Number of nodes with available pods: 1
Dec 18 12:21:18.957: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 18 12:21:19.294: INFO: Number of nodes with available pods: 0
Dec 18 12:21:19.294: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:20.337: INFO: Number of nodes with available pods: 0
Dec 18 12:21:20.338: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:21.640: INFO: Number of nodes with available pods: 0
Dec 18 12:21:21.641: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:22.318: INFO: Number of nodes with available pods: 0
Dec 18 12:21:22.319: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:23.360: INFO: Number of nodes with available pods: 0
Dec 18 12:21:23.360: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:24.361: INFO: Number of nodes with available pods: 0
Dec 18 12:21:24.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:25.316: INFO: Number of nodes with available pods: 0
Dec 18 12:21:25.316: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:26.794: INFO: Number of nodes with available pods: 0
Dec 18 12:21:26.794: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:27.463: INFO: Number of nodes with available pods: 0
Dec 18 12:21:27.463: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:28.385: INFO: Number of nodes with available pods: 0
Dec 18 12:21:28.385: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 12:21:29.332: INFO: Number of nodes with available pods: 1
Dec 18 12:21:29.332: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-v6bht, will wait for the garbage collector to delete the pods
Dec 18 12:21:29.450: INFO: Deleting DaemonSet.extensions daemon-set took: 47.021929ms
Dec 18 12:21:29.650: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.945994ms
Dec 18 12:21:37.351: INFO: Number of nodes with available pods: 0
Dec 18 12:21:37.351: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 12:21:37.361: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-v6bht/daemonsets","resourceVersion":"15232736"},"items":null}

Dec 18 12:21:37.371: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-v6bht/pods","resourceVersion":"15232736"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:21:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-v6bht" for this suite.
Dec 18 12:21:45.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:21:45.574: INFO: namespace: e2e-tests-daemonsets-v6bht, resource: bindings, ignored listing per whitelist
Dec 18 12:21:45.589: INFO: namespace e2e-tests-daemonsets-v6bht deletion completed in 8.18908003s

• [SLOW TEST:35.956 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
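The repeated `Number of nodes with available pods` lines above are a poll loop: the framework re-checks DaemonSet status until every node runs an available daemon pod (and again after the pod is force-failed, to confirm revival). A sketch of that wait loop (names are illustrative; the real helper lives in `test/e2e/apps`):

```python
import time

def wait_for_available(get_available_nodes, desired,
                       timeout=300.0, interval=1.0, sleep=time.sleep):
    """Poll until `desired` nodes report an available daemon pod, or the
    timeout elapses -- mirroring the retry loop visible in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_available_nodes() >= desired:
            return True
        sleep(interval)
    return False
```

The injectable `sleep` keeps the sketch testable without real delays; the e2e framework uses a fixed poll interval against the live API server instead.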
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:21:45.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-wft9t
I1218 12:21:45.863550       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-wft9t, replica count: 1
I1218 12:21:46.914717       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:47.915505       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:48.916390       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:49.917067       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:50.918269       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:51.919089       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:52.920169       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:53.920855       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:54.921706       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:21:55.922820       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 18 12:21:56.180: INFO: Created: latency-svc-hb5hl
Dec 18 12:21:56.215: INFO: Got endpoints: latency-svc-hb5hl [190.155588ms]
Dec 18 12:21:56.397: INFO: Created: latency-svc-khp2h
Dec 18 12:21:56.446: INFO: Got endpoints: latency-svc-khp2h [230.431171ms]
Dec 18 12:21:56.669: INFO: Created: latency-svc-s674n
Dec 18 12:21:56.672: INFO: Got endpoints: latency-svc-s674n [454.375048ms]
Dec 18 12:21:56.729: INFO: Created: latency-svc-tj65v
Dec 18 12:21:56.903: INFO: Got endpoints: latency-svc-tj65v [684.884004ms]
Dec 18 12:21:56.919: INFO: Created: latency-svc-m6cxq
Dec 18 12:21:56.934: INFO: Got endpoints: latency-svc-m6cxq [716.861151ms]
Dec 18 12:21:57.106: INFO: Created: latency-svc-5l9st
Dec 18 12:21:57.142: INFO: Got endpoints: latency-svc-5l9st [926.40714ms]
Dec 18 12:21:57.400: INFO: Created: latency-svc-tw7lb
Dec 18 12:21:57.426: INFO: Got endpoints: latency-svc-tw7lb [1.208816093s]
Dec 18 12:21:57.563: INFO: Created: latency-svc-5svk9
Dec 18 12:21:57.580: INFO: Got endpoints: latency-svc-5svk9 [1.364176169s]
Dec 18 12:21:57.659: INFO: Created: latency-svc-7mtks
Dec 18 12:21:57.828: INFO: Got endpoints: latency-svc-7mtks [1.611393062s]
Dec 18 12:21:57.869: INFO: Created: latency-svc-667wz
Dec 18 12:21:57.919: INFO: Got endpoints: latency-svc-667wz [1.702159s]
Dec 18 12:21:58.067: INFO: Created: latency-svc-clxtg
Dec 18 12:21:58.080: INFO: Got endpoints: latency-svc-clxtg [1.862520633s]
Dec 18 12:21:58.276: INFO: Created: latency-svc-qjp8t
Dec 18 12:21:58.313: INFO: Got endpoints: latency-svc-qjp8t [2.095878374s]
Dec 18 12:21:58.598: INFO: Created: latency-svc-pnkrh
Dec 18 12:21:58.808: INFO: Got endpoints: latency-svc-pnkrh [2.591644896s]
Dec 18 12:21:58.849: INFO: Created: latency-svc-qt97h
Dec 18 12:21:58.908: INFO: Got endpoints: latency-svc-qt97h [2.685248625s]
Dec 18 12:21:59.112: INFO: Created: latency-svc-kt6v9
Dec 18 12:21:59.122: INFO: Got endpoints: latency-svc-kt6v9 [2.90516457s]
Dec 18 12:21:59.200: INFO: Created: latency-svc-lsphg
Dec 18 12:21:59.354: INFO: Got endpoints: latency-svc-lsphg [3.136658518s]
Dec 18 12:21:59.677: INFO: Created: latency-svc-89g2h
Dec 18 12:21:59.699: INFO: Got endpoints: latency-svc-89g2h [3.252172674s]
Dec 18 12:21:59.902: INFO: Created: latency-svc-dk6nw
Dec 18 12:21:59.953: INFO: Got endpoints: latency-svc-dk6nw [3.281255598s]
Dec 18 12:21:59.994: INFO: Created: latency-svc-njtkd
Dec 18 12:22:00.101: INFO: Got endpoints: latency-svc-njtkd [3.197422154s]
Dec 18 12:22:00.131: INFO: Created: latency-svc-9g2dz
Dec 18 12:22:00.151: INFO: Got endpoints: latency-svc-9g2dz [3.217281137s]
Dec 18 12:22:00.213: INFO: Created: latency-svc-cvzwj
Dec 18 12:22:00.326: INFO: Got endpoints: latency-svc-cvzwj [3.183513685s]
Dec 18 12:22:00.355: INFO: Created: latency-svc-zdg5f
Dec 18 12:22:00.378: INFO: Got endpoints: latency-svc-zdg5f [2.952120845s]
Dec 18 12:22:00.622: INFO: Created: latency-svc-rjjp5
Dec 18 12:22:00.623: INFO: Got endpoints: latency-svc-rjjp5 [3.042757393s]
Dec 18 12:22:00.907: INFO: Created: latency-svc-rfmdg
Dec 18 12:22:01.156: INFO: Got endpoints: latency-svc-rfmdg [3.327301175s]
Dec 18 12:22:01.232: INFO: Created: latency-svc-6k7d6
Dec 18 12:22:01.233: INFO: Got endpoints: latency-svc-6k7d6 [3.312815864s]
Dec 18 12:22:01.362: INFO: Created: latency-svc-x4rkd
Dec 18 12:22:01.387: INFO: Got endpoints: latency-svc-x4rkd [3.306491422s]
Dec 18 12:22:01.644: INFO: Created: latency-svc-94hwt
Dec 18 12:22:01.694: INFO: Got endpoints: latency-svc-94hwt [3.380787247s]
Dec 18 12:22:01.888: INFO: Created: latency-svc-72976
Dec 18 12:22:01.920: INFO: Got endpoints: latency-svc-72976 [3.110829755s]
Dec 18 12:22:02.052: INFO: Created: latency-svc-hh4nv
Dec 18 12:22:02.069: INFO: Got endpoints: latency-svc-hh4nv [3.16031981s]
Dec 18 12:22:02.124: INFO: Created: latency-svc-hfdv2
Dec 18 12:22:02.395: INFO: Got endpoints: latency-svc-hfdv2 [3.273189479s]
Dec 18 12:22:02.462: INFO: Created: latency-svc-55bmj
Dec 18 12:22:02.719: INFO: Got endpoints: latency-svc-55bmj [3.364494754s]
Dec 18 12:22:02.782: INFO: Created: latency-svc-wdmr4
Dec 18 12:22:02.813: INFO: Got endpoints: latency-svc-wdmr4 [3.113868797s]
Dec 18 12:22:02.984: INFO: Created: latency-svc-m9fxw
Dec 18 12:22:03.005: INFO: Got endpoints: latency-svc-m9fxw [285.424202ms]
Dec 18 12:22:03.188: INFO: Created: latency-svc-dr7bw
Dec 18 12:22:03.203: INFO: Got endpoints: latency-svc-dr7bw [3.249891281s]
Dec 18 12:22:03.403: INFO: Created: latency-svc-lfmvx
Dec 18 12:22:03.446: INFO: Got endpoints: latency-svc-lfmvx [3.34480008s]
Dec 18 12:22:03.657: INFO: Created: latency-svc-6s8qr
Dec 18 12:22:03.659: INFO: Got endpoints: latency-svc-6s8qr [3.50744088s]
Dec 18 12:22:03.946: INFO: Created: latency-svc-hb884
Dec 18 12:22:03.970: INFO: Got endpoints: latency-svc-hb884 [3.642964559s]
Dec 18 12:22:04.130: INFO: Created: latency-svc-7wnlg
Dec 18 12:22:04.138: INFO: Got endpoints: latency-svc-7wnlg [3.759336278s]
Dec 18 12:22:04.377: INFO: Created: latency-svc-5fmqm
Dec 18 12:22:04.436: INFO: Got endpoints: latency-svc-5fmqm [3.813470806s]
Dec 18 12:22:04.570: INFO: Created: latency-svc-5h4ss
Dec 18 12:22:04.603: INFO: Got endpoints: latency-svc-5h4ss [3.446469797s]
Dec 18 12:22:04.778: INFO: Created: latency-svc-5kcbd
Dec 18 12:22:04.812: INFO: Got endpoints: latency-svc-5kcbd [3.579105859s]
Dec 18 12:22:04.817: INFO: Created: latency-svc-vtsvq
Dec 18 12:22:04.832: INFO: Got endpoints: latency-svc-vtsvq [3.445430736s]
Dec 18 12:22:05.018: INFO: Created: latency-svc-cnvwx
Dec 18 12:22:05.028: INFO: Got endpoints: latency-svc-cnvwx [3.332573816s]
Dec 18 12:22:05.202: INFO: Created: latency-svc-9tfpk
Dec 18 12:22:05.208: INFO: Got endpoints: latency-svc-9tfpk [3.288296907s]
Dec 18 12:22:05.259: INFO: Created: latency-svc-plnqq
Dec 18 12:22:05.271: INFO: Got endpoints: latency-svc-plnqq [3.202322127s]
Dec 18 12:22:05.399: INFO: Created: latency-svc-qmm4h
Dec 18 12:22:05.414: INFO: Got endpoints: latency-svc-qmm4h [3.018304683s]
Dec 18 12:22:05.476: INFO: Created: latency-svc-n6dcw
Dec 18 12:22:05.593: INFO: Got endpoints: latency-svc-n6dcw [2.778808858s]
Dec 18 12:22:05.606: INFO: Created: latency-svc-dvf2w
Dec 18 12:22:05.632: INFO: Got endpoints: latency-svc-dvf2w [2.62727967s]
Dec 18 12:22:05.682: INFO: Created: latency-svc-kf7h9
Dec 18 12:22:05.852: INFO: Got endpoints: latency-svc-kf7h9 [2.647871704s]
Dec 18 12:22:05.914: INFO: Created: latency-svc-stffq
Dec 18 12:22:05.934: INFO: Got endpoints: latency-svc-stffq [2.48783425s]
Dec 18 12:22:06.152: INFO: Created: latency-svc-44xvn
Dec 18 12:22:06.167: INFO: Got endpoints: latency-svc-44xvn [2.507877128s]
Dec 18 12:22:06.416: INFO: Created: latency-svc-hhzgx
Dec 18 12:22:06.459: INFO: Got endpoints: latency-svc-hhzgx [2.489076678s]
Dec 18 12:22:06.625: INFO: Created: latency-svc-c2n7z
Dec 18 12:22:06.661: INFO: Got endpoints: latency-svc-c2n7z [2.523066696s]
Dec 18 12:22:06.777: INFO: Created: latency-svc-gcxtv
Dec 18 12:22:06.800: INFO: Got endpoints: latency-svc-gcxtv [2.362934347s]
Dec 18 12:22:06.995: INFO: Created: latency-svc-s7prc
Dec 18 12:22:07.015: INFO: Got endpoints: latency-svc-s7prc [2.411724362s]
Dec 18 12:22:07.194: INFO: Created: latency-svc-bk6sc
Dec 18 12:22:07.195: INFO: Got endpoints: latency-svc-bk6sc [2.382079172s]
Dec 18 12:22:07.451: INFO: Created: latency-svc-nd8fb
Dec 18 12:22:07.488: INFO: Got endpoints: latency-svc-nd8fb [2.655243902s]
Dec 18 12:22:07.664: INFO: Created: latency-svc-vtwr6
Dec 18 12:22:07.883: INFO: Got endpoints: latency-svc-vtwr6 [2.855013584s]
Dec 18 12:22:08.138: INFO: Created: latency-svc-95qlp
Dec 18 12:22:08.171: INFO: Got endpoints: latency-svc-95qlp [2.962193875s]
Dec 18 12:22:08.341: INFO: Created: latency-svc-vbbml
Dec 18 12:22:08.357: INFO: Got endpoints: latency-svc-vbbml [3.08616743s]
Dec 18 12:22:08.566: INFO: Created: latency-svc-lg6pw
Dec 18 12:22:08.623: INFO: Got endpoints: latency-svc-lg6pw [3.208162715s]
Dec 18 12:22:09.336: INFO: Created: latency-svc-4bjz5
Dec 18 12:22:09.369: INFO: Got endpoints: latency-svc-4bjz5 [3.776357237s]
Dec 18 12:22:09.542: INFO: Created: latency-svc-gjlt8
Dec 18 12:22:09.546: INFO: Got endpoints: latency-svc-gjlt8 [3.913965777s]
Dec 18 12:22:09.728: INFO: Created: latency-svc-g2jmm
Dec 18 12:22:09.759: INFO: Got endpoints: latency-svc-g2jmm [3.906892437s]
Dec 18 12:22:09.919: INFO: Created: latency-svc-5zvg4
Dec 18 12:22:09.946: INFO: Got endpoints: latency-svc-5zvg4 [4.011849415s]
Dec 18 12:22:10.092: INFO: Created: latency-svc-p7xmp
Dec 18 12:22:10.121: INFO: Got endpoints: latency-svc-p7xmp [3.953008354s]
Dec 18 12:22:10.268: INFO: Created: latency-svc-xnvtf
Dec 18 12:22:10.288: INFO: Got endpoints: latency-svc-xnvtf [3.8282383s]
Dec 18 12:22:10.454: INFO: Created: latency-svc-b49dv
Dec 18 12:22:10.744: INFO: Got endpoints: latency-svc-b49dv [4.082453529s]
Dec 18 12:22:11.513: INFO: Created: latency-svc-6vsxk
Dec 18 12:22:11.513: INFO: Got endpoints: latency-svc-6vsxk [4.713007487s]
Dec 18 12:22:11.719: INFO: Created: latency-svc-m5lvn
Dec 18 12:22:11.727: INFO: Got endpoints: latency-svc-m5lvn [4.711587518s]
Dec 18 12:22:11.866: INFO: Created: latency-svc-kzkdq
Dec 18 12:22:11.884: INFO: Got endpoints: latency-svc-kzkdq [4.689376043s]
Dec 18 12:22:11.944: INFO: Created: latency-svc-j4xxm
Dec 18 12:22:12.040: INFO: Got endpoints: latency-svc-j4xxm [4.550932492s]
Dec 18 12:22:12.056: INFO: Created: latency-svc-9jn46
Dec 18 12:22:12.063: INFO: Got endpoints: latency-svc-9jn46 [4.179243275s]
Dec 18 12:22:12.154: INFO: Created: latency-svc-5d8zz
Dec 18 12:22:12.311: INFO: Got endpoints: latency-svc-5d8zz [4.139961434s]
Dec 18 12:22:12.365: INFO: Created: latency-svc-4f8tb
Dec 18 12:22:12.392: INFO: Got endpoints: latency-svc-4f8tb [4.034790538s]
Dec 18 12:22:12.547: INFO: Created: latency-svc-2glzq
Dec 18 12:22:12.555: INFO: Got endpoints: latency-svc-2glzq [3.931701211s]
Dec 18 12:22:12.733: INFO: Created: latency-svc-gzxbh
Dec 18 12:22:12.739: INFO: Got endpoints: latency-svc-gzxbh [3.36933458s]
Dec 18 12:22:12.794: INFO: Created: latency-svc-f2dhg
Dec 18 12:22:12.810: INFO: Got endpoints: latency-svc-f2dhg [3.26390458s]
Dec 18 12:22:13.009: INFO: Created: latency-svc-zb7vb
Dec 18 12:22:13.024: INFO: Got endpoints: latency-svc-zb7vb [3.264323534s]
Dec 18 12:22:13.265: INFO: Created: latency-svc-cwshs
Dec 18 12:22:13.281: INFO: Got endpoints: latency-svc-cwshs [3.334300172s]
Dec 18 12:22:13.481: INFO: Created: latency-svc-8hnlj
Dec 18 12:22:13.696: INFO: Got endpoints: latency-svc-8hnlj [3.57472564s]
Dec 18 12:22:13.699: INFO: Created: latency-svc-4nlh2
Dec 18 12:22:13.706: INFO: Got endpoints: latency-svc-4nlh2 [3.417956554s]
Dec 18 12:22:13.960: INFO: Created: latency-svc-rb2kc
Dec 18 12:22:14.004: INFO: Got endpoints: latency-svc-rb2kc [3.259273428s]
Dec 18 12:22:14.230: INFO: Created: latency-svc-ppn9w
Dec 18 12:22:14.293: INFO: Got endpoints: latency-svc-ppn9w [2.77942087s]
Dec 18 12:22:14.405: INFO: Created: latency-svc-fh6m6
Dec 18 12:22:14.440: INFO: Got endpoints: latency-svc-fh6m6 [2.71334483s]
Dec 18 12:22:14.595: INFO: Created: latency-svc-dfkp6
Dec 18 12:22:14.648: INFO: Got endpoints: latency-svc-dfkp6 [2.763706509s]
Dec 18 12:22:14.802: INFO: Created: latency-svc-rbt2c
Dec 18 12:22:14.817: INFO: Got endpoints: latency-svc-rbt2c [2.776769696s]
Dec 18 12:22:14.872: INFO: Created: latency-svc-d8z4k
Dec 18 12:22:15.019: INFO: Got endpoints: latency-svc-d8z4k [2.956121244s]
Dec 18 12:22:15.087: INFO: Created: latency-svc-g2vpf
Dec 18 12:22:15.120: INFO: Got endpoints: latency-svc-g2vpf [2.809038436s]
Dec 18 12:22:15.298: INFO: Created: latency-svc-bcw8f
Dec 18 12:22:15.373: INFO: Got endpoints: latency-svc-bcw8f [2.97983923s]
Dec 18 12:22:15.490: INFO: Created: latency-svc-xzc6p
Dec 18 12:22:15.505: INFO: Got endpoints: latency-svc-xzc6p [2.95031841s]
Dec 18 12:22:15.586: INFO: Created: latency-svc-nxnfg
Dec 18 12:22:15.680: INFO: Got endpoints: latency-svc-nxnfg [2.940637474s]
Dec 18 12:22:15.714: INFO: Created: latency-svc-7692s
Dec 18 12:22:15.728: INFO: Got endpoints: latency-svc-7692s [2.917904452s]
Dec 18 12:22:15.913: INFO: Created: latency-svc-8lfvb
Dec 18 12:22:15.934: INFO: Got endpoints: latency-svc-8lfvb [2.91046922s]
Dec 18 12:22:16.125: INFO: Created: latency-svc-9kcj2
Dec 18 12:22:16.147: INFO: Got endpoints: latency-svc-9kcj2 [2.865969925s]
Dec 18 12:22:16.219: INFO: Created: latency-svc-zwx96
Dec 18 12:22:16.355: INFO: Got endpoints: latency-svc-zwx96 [2.658740627s]
Dec 18 12:22:16.456: INFO: Created: latency-svc-7pqkd
Dec 18 12:22:16.600: INFO: Got endpoints: latency-svc-7pqkd [2.894278664s]
Dec 18 12:22:16.839: INFO: Created: latency-svc-qzr8b
Dec 18 12:22:16.918: INFO: Created: latency-svc-vxbsv
Dec 18 12:22:17.063: INFO: Got endpoints: latency-svc-qzr8b [3.058934313s]
Dec 18 12:22:17.082: INFO: Created: latency-svc-d2cbx
Dec 18 12:22:17.097: INFO: Got endpoints: latency-svc-d2cbx [2.656230588s]
Dec 18 12:22:17.133: INFO: Got endpoints: latency-svc-vxbsv [2.839604786s]
Dec 18 12:22:17.337: INFO: Created: latency-svc-4nrg6
Dec 18 12:22:17.357: INFO: Got endpoints: latency-svc-4nrg6 [2.708553521s]
Dec 18 12:22:17.536: INFO: Created: latency-svc-2fc25
Dec 18 12:22:17.583: INFO: Got endpoints: latency-svc-2fc25 [2.766353602s]
Dec 18 12:22:17.618: INFO: Created: latency-svc-hrfmd
Dec 18 12:22:17.704: INFO: Got endpoints: latency-svc-hrfmd [2.683958288s]
Dec 18 12:22:17.939: INFO: Created: latency-svc-tjpfk
Dec 18 12:22:17.949: INFO: Got endpoints: latency-svc-tjpfk [2.828693972s]
Dec 18 12:22:18.105: INFO: Created: latency-svc-tnvvj
Dec 18 12:22:18.110: INFO: Got endpoints: latency-svc-tnvvj [2.736741278s]
Dec 18 12:22:18.179: INFO: Created: latency-svc-p62vq
Dec 18 12:22:18.317: INFO: Got endpoints: latency-svc-p62vq [2.81117052s]
Dec 18 12:22:18.354: INFO: Created: latency-svc-28qzz
Dec 18 12:22:18.421: INFO: Got endpoints: latency-svc-28qzz [2.741086537s]
Dec 18 12:22:18.538: INFO: Created: latency-svc-cdqvj
Dec 18 12:22:18.573: INFO: Got endpoints: latency-svc-cdqvj [2.844023775s]
Dec 18 12:22:18.763: INFO: Created: latency-svc-xg7h4
Dec 18 12:22:18.765: INFO: Got endpoints: latency-svc-xg7h4 [2.830170336s]
Dec 18 12:22:18.834: INFO: Created: latency-svc-qh758
Dec 18 12:22:18.970: INFO: Got endpoints: latency-svc-qh758 [2.822986129s]
Dec 18 12:22:19.029: INFO: Created: latency-svc-9wv82
Dec 18 12:22:19.045: INFO: Got endpoints: latency-svc-9wv82 [2.689381529s]
Dec 18 12:22:19.181: INFO: Created: latency-svc-hsdtw
Dec 18 12:22:19.213: INFO: Got endpoints: latency-svc-hsdtw [2.611763108s]
Dec 18 12:22:19.411: INFO: Created: latency-svc-jpw4l
Dec 18 12:22:19.425: INFO: Got endpoints: latency-svc-jpw4l [2.362180136s]
Dec 18 12:22:19.570: INFO: Created: latency-svc-shpd4
Dec 18 12:22:19.586: INFO: Got endpoints: latency-svc-shpd4 [2.488727266s]
Dec 18 12:22:19.761: INFO: Created: latency-svc-6hdzv
Dec 18 12:22:19.965: INFO: Got endpoints: latency-svc-6hdzv [2.832194272s]
Dec 18 12:22:19.984: INFO: Created: latency-svc-gqpdv
Dec 18 12:22:19.986: INFO: Got endpoints: latency-svc-gqpdv [2.628761334s]
Dec 18 12:22:20.070: INFO: Created: latency-svc-tmphj
Dec 18 12:22:20.140: INFO: Got endpoints: latency-svc-tmphj [2.556251227s]
Dec 18 12:22:20.174: INFO: Created: latency-svc-gwcs7
Dec 18 12:22:20.190: INFO: Got endpoints: latency-svc-gwcs7 [2.485756666s]
Dec 18 12:22:20.371: INFO: Created: latency-svc-mnfvj
Dec 18 12:22:20.401: INFO: Got endpoints: latency-svc-mnfvj [2.451857096s]
Dec 18 12:22:20.568: INFO: Created: latency-svc-kg2mf
Dec 18 12:22:20.589: INFO: Got endpoints: latency-svc-kg2mf [2.479469032s]
Dec 18 12:22:20.730: INFO: Created: latency-svc-4whrn
Dec 18 12:22:20.737: INFO: Got endpoints: latency-svc-4whrn [2.419867062s]
Dec 18 12:22:20.813: INFO: Created: latency-svc-frfn8
Dec 18 12:22:20.896: INFO: Got endpoints: latency-svc-frfn8 [2.474721023s]
Dec 18 12:22:20.954: INFO: Created: latency-svc-xn8s2
Dec 18 12:22:20.964: INFO: Got endpoints: latency-svc-xn8s2 [2.390355382s]
Dec 18 12:22:21.109: INFO: Created: latency-svc-k5vgb
Dec 18 12:22:21.192: INFO: Created: latency-svc-ddmzx
Dec 18 12:22:21.200: INFO: Got endpoints: latency-svc-k5vgb [2.434860414s]
Dec 18 12:22:21.350: INFO: Got endpoints: latency-svc-ddmzx [2.379316441s]
Dec 18 12:22:21.372: INFO: Created: latency-svc-hr986
Dec 18 12:22:21.407: INFO: Got endpoints: latency-svc-hr986 [2.362132789s]
Dec 18 12:22:21.538: INFO: Created: latency-svc-lws4s
Dec 18 12:22:21.586: INFO: Got endpoints: latency-svc-lws4s [2.373038895s]
Dec 18 12:22:21.704: INFO: Created: latency-svc-b7r99
Dec 18 12:22:21.736: INFO: Got endpoints: latency-svc-b7r99 [2.309891039s]
Dec 18 12:22:21.775: INFO: Created: latency-svc-5f2dx
Dec 18 12:22:21.963: INFO: Created: latency-svc-pq99m
Dec 18 12:22:21.964: INFO: Got endpoints: latency-svc-5f2dx [2.377082239s]
Dec 18 12:22:21.967: INFO: Got endpoints: latency-svc-pq99m [2.001291399s]
Dec 18 12:22:22.108: INFO: Created: latency-svc-8rc64
Dec 18 12:22:22.127: INFO: Got endpoints: latency-svc-8rc64 [2.140779917s]
Dec 18 12:22:22.327: INFO: Created: latency-svc-77vd6
Dec 18 12:22:22.335: INFO: Got endpoints: latency-svc-77vd6 [2.194087494s]
Dec 18 12:22:22.565: INFO: Created: latency-svc-27cpq
Dec 18 12:22:22.607: INFO: Got endpoints: latency-svc-27cpq [2.417473166s]
Dec 18 12:22:22.685: INFO: Created: latency-svc-c5zpg
Dec 18 12:22:22.713: INFO: Got endpoints: latency-svc-c5zpg [2.311277645s]
Dec 18 12:22:22.929: INFO: Created: latency-svc-xclwh
Dec 18 12:22:22.935: INFO: Got endpoints: latency-svc-xclwh [2.345710816s]
Dec 18 12:22:23.147: INFO: Created: latency-svc-pcv9n
Dec 18 12:22:23.162: INFO: Got endpoints: latency-svc-pcv9n [2.424428014s]
Dec 18 12:22:23.333: INFO: Created: latency-svc-qq5sj
Dec 18 12:22:23.339: INFO: Got endpoints: latency-svc-qq5sj [2.441827487s]
Dec 18 12:22:23.510: INFO: Created: latency-svc-dlb9p
Dec 18 12:22:23.558: INFO: Got endpoints: latency-svc-dlb9p [2.59395313s]
Dec 18 12:22:23.702: INFO: Created: latency-svc-ftzlh
Dec 18 12:22:23.732: INFO: Got endpoints: latency-svc-ftzlh [2.531566119s]
Dec 18 12:22:23.784: INFO: Created: latency-svc-8jqr6
Dec 18 12:22:23.895: INFO: Got endpoints: latency-svc-8jqr6 [2.544110867s]
Dec 18 12:22:23.974: INFO: Created: latency-svc-7lh7f
Dec 18 12:22:24.476: INFO: Got endpoints: latency-svc-7lh7f [3.068378623s]
Dec 18 12:22:24.521: INFO: Created: latency-svc-4hhl4
Dec 18 12:22:24.684: INFO: Got endpoints: latency-svc-4hhl4 [3.096745569s]
Dec 18 12:22:24.733: INFO: Created: latency-svc-x6wl8
Dec 18 12:22:24.934: INFO: Got endpoints: latency-svc-x6wl8 [3.198144073s]
Dec 18 12:22:24.969: INFO: Created: latency-svc-njd28
Dec 18 12:22:25.178: INFO: Got endpoints: latency-svc-njd28 [3.214176576s]
Dec 18 12:22:25.192: INFO: Created: latency-svc-m6ptc
Dec 18 12:22:25.227: INFO: Got endpoints: latency-svc-m6ptc [3.260049211s]
Dec 18 12:22:25.366: INFO: Created: latency-svc-ngvzv
Dec 18 12:22:25.384: INFO: Got endpoints: latency-svc-ngvzv [3.255840392s]
Dec 18 12:22:25.453: INFO: Created: latency-svc-5xq45
Dec 18 12:22:25.557: INFO: Got endpoints: latency-svc-5xq45 [3.222172562s]
Dec 18 12:22:25.574: INFO: Created: latency-svc-zcr5j
Dec 18 12:22:25.607: INFO: Got endpoints: latency-svc-zcr5j [2.999619739s]
Dec 18 12:22:25.763: INFO: Created: latency-svc-rnjct
Dec 18 12:22:25.787: INFO: Got endpoints: latency-svc-rnjct [3.073617094s]
Dec 18 12:22:25.985: INFO: Created: latency-svc-fspjd
Dec 18 12:22:25.989: INFO: Got endpoints: latency-svc-fspjd [3.053012848s]
Dec 18 12:22:26.371: INFO: Created: latency-svc-w47bn
Dec 18 12:22:26.540: INFO: Got endpoints: latency-svc-w47bn [3.377802109s]
Dec 18 12:22:26.703: INFO: Created: latency-svc-gfq86
Dec 18 12:22:26.719: INFO: Got endpoints: latency-svc-gfq86 [3.379689159s]
Dec 18 12:22:27.245: INFO: Created: latency-svc-hvz5c
Dec 18 12:22:27.273: INFO: Got endpoints: latency-svc-hvz5c [3.714774302s]
Dec 18 12:22:27.564: INFO: Created: latency-svc-phbmd
Dec 18 12:22:27.628: INFO: Got endpoints: latency-svc-phbmd [3.896049744s]
Dec 18 12:22:27.832: INFO: Created: latency-svc-qwtg8
Dec 18 12:22:27.904: INFO: Got endpoints: latency-svc-qwtg8 [4.008327187s]
Dec 18 12:22:28.176: INFO: Created: latency-svc-snddz
Dec 18 12:22:28.373: INFO: Got endpoints: latency-svc-snddz [3.896394496s]
Dec 18 12:22:28.407: INFO: Created: latency-svc-qsg74
Dec 18 12:22:28.432: INFO: Got endpoints: latency-svc-qsg74 [3.747993116s]
Dec 18 12:22:28.664: INFO: Created: latency-svc-6vblm
Dec 18 12:22:28.790: INFO: Got endpoints: latency-svc-6vblm [3.855198271s]
Dec 18 12:22:28.832: INFO: Created: latency-svc-crfmq
Dec 18 12:22:28.960: INFO: Got endpoints: latency-svc-crfmq [3.780976114s]
Dec 18 12:22:29.055: INFO: Created: latency-svc-qpvsf
Dec 18 12:22:29.177: INFO: Got endpoints: latency-svc-qpvsf [3.95018243s]
Dec 18 12:22:29.189: INFO: Created: latency-svc-ncsts
Dec 18 12:22:29.216: INFO: Got endpoints: latency-svc-ncsts [3.832506913s]
Dec 18 12:22:29.432: INFO: Created: latency-svc-jl6rz
Dec 18 12:22:29.470: INFO: Got endpoints: latency-svc-jl6rz [3.912745044s]
Dec 18 12:22:29.678: INFO: Created: latency-svc-7hzb7
Dec 18 12:22:29.748: INFO: Got endpoints: latency-svc-7hzb7 [4.139848169s]
Dec 18 12:22:29.750: INFO: Created: latency-svc-qgmjd
Dec 18 12:22:29.865: INFO: Got endpoints: latency-svc-qgmjd [4.078085036s]
Dec 18 12:22:29.907: INFO: Created: latency-svc-qs6vd
Dec 18 12:22:29.946: INFO: Got endpoints: latency-svc-qs6vd [3.957204684s]
Dec 18 12:22:30.114: INFO: Created: latency-svc-jpbfz
Dec 18 12:22:30.141: INFO: Got endpoints: latency-svc-jpbfz [3.601044063s]
Dec 18 12:22:30.459: INFO: Created: latency-svc-rvdds
Dec 18 12:22:30.475: INFO: Got endpoints: latency-svc-rvdds [3.756058531s]
Dec 18 12:22:30.769: INFO: Created: latency-svc-ng4b4
Dec 18 12:22:30.784: INFO: Got endpoints: latency-svc-ng4b4 [3.510851475s]
Dec 18 12:22:30.961: INFO: Created: latency-svc-5st4k
Dec 18 12:22:30.972: INFO: Got endpoints: latency-svc-5st4k [3.34325498s]
Dec 18 12:22:31.154: INFO: Created: latency-svc-j8g9n
Dec 18 12:22:31.271: INFO: Got endpoints: latency-svc-j8g9n [3.366649837s]
Dec 18 12:22:31.472: INFO: Created: latency-svc-zbn67
Dec 18 12:22:31.651: INFO: Got endpoints: latency-svc-zbn67 [3.278374469s]
Dec 18 12:22:31.955: INFO: Created: latency-svc-9s4pr
Dec 18 12:22:35.213: INFO: Got endpoints: latency-svc-9s4pr [6.780448092s]
Dec 18 12:22:35.301: INFO: Created: latency-svc-spsd7
Dec 18 12:22:35.315: INFO: Got endpoints: latency-svc-spsd7 [6.524527724s]
Dec 18 12:22:35.554: INFO: Created: latency-svc-vz9zv
Dec 18 12:22:35.701: INFO: Got endpoints: latency-svc-vz9zv [6.740669743s]
Dec 18 12:22:35.899: INFO: Created: latency-svc-wglbs
Dec 18 12:22:36.105: INFO: Got endpoints: latency-svc-wglbs [6.92778211s]
Dec 18 12:22:36.156: INFO: Created: latency-svc-kt69h
Dec 18 12:22:36.166: INFO: Got endpoints: latency-svc-kt69h [6.949230389s]
Dec 18 12:22:36.396: INFO: Created: latency-svc-l7hcs
Dec 18 12:22:36.550: INFO: Got endpoints: latency-svc-l7hcs [7.079466222s]
Dec 18 12:22:36.577: INFO: Created: latency-svc-z6w6l
Dec 18 12:22:36.613: INFO: Got endpoints: latency-svc-z6w6l [6.864742751s]
Dec 18 12:22:36.840: INFO: Created: latency-svc-q7c9m
Dec 18 12:22:36.889: INFO: Got endpoints: latency-svc-q7c9m [7.022567979s]
Dec 18 12:22:37.009: INFO: Created: latency-svc-lbn8b
Dec 18 12:22:37.023: INFO: Got endpoints: latency-svc-lbn8b [7.076542864s]
Dec 18 12:22:37.074: INFO: Created: latency-svc-lfkxg
Dec 18 12:22:37.249: INFO: Got endpoints: latency-svc-lfkxg [7.10722858s]
Dec 18 12:22:37.271: INFO: Created: latency-svc-jhh9b
Dec 18 12:22:37.316: INFO: Got endpoints: latency-svc-jhh9b [6.840224139s]
Dec 18 12:22:37.428: INFO: Created: latency-svc-g8h89
Dec 18 12:22:37.439: INFO: Got endpoints: latency-svc-g8h89 [6.654583428s]
Dec 18 12:22:37.496: INFO: Created: latency-svc-64sr6
Dec 18 12:22:37.501: INFO: Got endpoints: latency-svc-64sr6 [6.529076126s]
Dec 18 12:22:37.613: INFO: Created: latency-svc-5dkq5
Dec 18 12:22:37.638: INFO: Got endpoints: latency-svc-5dkq5 [6.366983104s]
Dec 18 12:22:37.689: INFO: Created: latency-svc-zkwtw
Dec 18 12:22:37.845: INFO: Got endpoints: latency-svc-zkwtw [6.193659056s]
Dec 18 12:22:37.945: INFO: Created: latency-svc-fpm8t
Dec 18 12:22:38.137: INFO: Got endpoints: latency-svc-fpm8t [2.923322001s]
Dec 18 12:22:38.141: INFO: Created: latency-svc-bqlbm
Dec 18 12:22:38.172: INFO: Got endpoints: latency-svc-bqlbm [2.857286193s]
Dec 18 12:22:38.339: INFO: Created: latency-svc-rg5bw
Dec 18 12:22:38.353: INFO: Got endpoints: latency-svc-rg5bw [2.652025158s]
Dec 18 12:22:38.401: INFO: Created: latency-svc-cbwnc
Dec 18 12:22:38.422: INFO: Got endpoints: latency-svc-cbwnc [2.316497277s]
Dec 18 12:22:38.515: INFO: Created: latency-svc-wmmf2
Dec 18 12:22:38.537: INFO: Got endpoints: latency-svc-wmmf2 [2.371275359s]
Dec 18 12:22:38.739: INFO: Created: latency-svc-sdr6p
Dec 18 12:22:38.769: INFO: Got endpoints: latency-svc-sdr6p [2.218235788s]
Dec 18 12:22:38.855: INFO: Created: latency-svc-sqbdb
Dec 18 12:22:38.956: INFO: Created: latency-svc-zjkv2
Dec 18 12:22:38.979: INFO: Got endpoints: latency-svc-sqbdb [2.366106392s]
Dec 18 12:22:38.983: INFO: Got endpoints: latency-svc-zjkv2 [2.094446376s]
Dec 18 12:22:39.022: INFO: Created: latency-svc-wdh4g
Dec 18 12:22:39.188: INFO: Got endpoints: latency-svc-wdh4g [2.164531603s]
Dec 18 12:22:39.231: INFO: Created: latency-svc-vsrqb
Dec 18 12:22:39.279: INFO: Got endpoints: latency-svc-vsrqb [2.029816228s]
Dec 18 12:22:39.290: INFO: Created: latency-svc-57vdz
Dec 18 12:22:39.397: INFO: Got endpoints: latency-svc-57vdz [2.080850367s]
Dec 18 12:22:39.484: INFO: Created: latency-svc-g7xkl
Dec 18 12:22:41.580: INFO: Got endpoints: latency-svc-g7xkl [4.141141496s]
Dec 18 12:22:41.628: INFO: Created: latency-svc-fwxpk
Dec 18 12:22:41.825: INFO: Got endpoints: latency-svc-fwxpk [4.323476425s]
Dec 18 12:22:41.876: INFO: Created: latency-svc-l8rn5
Dec 18 12:22:42.110: INFO: Got endpoints: latency-svc-l8rn5 [4.471358909s]
Dec 18 12:22:42.137: INFO: Created: latency-svc-4rjtg
Dec 18 12:22:42.169: INFO: Got endpoints: latency-svc-4rjtg [4.322316417s]
Dec 18 12:22:42.169: INFO: Latencies: [230.431171ms 285.424202ms 454.375048ms 684.884004ms 716.861151ms 926.40714ms 1.208816093s 1.364176169s 1.611393062s 1.702159s 1.862520633s 2.001291399s 2.029816228s 2.080850367s 2.094446376s 2.095878374s 2.140779917s 2.164531603s 2.194087494s 2.218235788s 2.309891039s 2.311277645s 2.316497277s 2.345710816s 2.362132789s 2.362180136s 2.362934347s 2.366106392s 2.371275359s 2.373038895s 2.377082239s 2.379316441s 2.382079172s 2.390355382s 2.411724362s 2.417473166s 2.419867062s 2.424428014s 2.434860414s 2.441827487s 2.451857096s 2.474721023s 2.479469032s 2.485756666s 2.48783425s 2.488727266s 2.489076678s 2.507877128s 2.523066696s 2.531566119s 2.544110867s 2.556251227s 2.591644896s 2.59395313s 2.611763108s 2.62727967s 2.628761334s 2.647871704s 2.652025158s 2.655243902s 2.656230588s 2.658740627s 2.683958288s 2.685248625s 2.689381529s 2.708553521s 2.71334483s 2.736741278s 2.741086537s 2.763706509s 2.766353602s 2.776769696s 2.778808858s 2.77942087s 2.809038436s 2.81117052s 2.822986129s 2.828693972s 2.830170336s 2.832194272s 2.839604786s 2.844023775s 2.855013584s 2.857286193s 2.865969925s 2.894278664s 2.90516457s 2.91046922s 2.917904452s 2.923322001s 2.940637474s 2.95031841s 2.952120845s 2.956121244s 2.962193875s 2.97983923s 2.999619739s 3.018304683s 3.042757393s 3.053012848s 3.058934313s 3.068378623s 3.073617094s 3.08616743s 3.096745569s 3.110829755s 3.113868797s 3.136658518s 3.16031981s 3.183513685s 3.197422154s 3.198144073s 3.202322127s 3.208162715s 3.214176576s 3.217281137s 3.222172562s 3.249891281s 3.252172674s 3.255840392s 3.259273428s 3.260049211s 3.26390458s 3.264323534s 3.273189479s 3.278374469s 3.281255598s 3.288296907s 3.306491422s 3.312815864s 3.327301175s 3.332573816s 3.334300172s 3.34325498s 3.34480008s 3.364494754s 3.366649837s 3.36933458s 3.377802109s 3.379689159s 3.380787247s 3.417956554s 3.445430736s 3.446469797s 3.50744088s 3.510851475s 3.57472564s 3.579105859s 3.601044063s 3.642964559s 3.714774302s 3.747993116s 3.756058531s 3.759336278s 3.776357237s 3.780976114s 3.813470806s 3.8282383s 3.832506913s 3.855198271s 3.896049744s 3.896394496s 3.906892437s 3.912745044s 3.913965777s 3.931701211s 3.95018243s 3.953008354s 3.957204684s 4.008327187s 4.011849415s 4.034790538s 4.078085036s 4.082453529s 4.139848169s 4.139961434s 4.141141496s 4.179243275s 4.322316417s 4.323476425s 4.471358909s 4.550932492s 4.689376043s 4.711587518s 4.713007487s 6.193659056s 6.366983104s 6.524527724s 6.529076126s 6.654583428s 6.740669743s 6.780448092s 6.840224139s 6.864742751s 6.92778211s 6.949230389s 7.022567979s 7.076542864s 7.079466222s 7.10722858s]
Dec 18 12:22:42.170: INFO: 50 %ile: 3.058934313s
Dec 18 12:22:42.170: INFO: 90 %ile: 4.471358909s
Dec 18 12:22:42.170: INFO: 99 %ile: 7.079466222s
Dec 18 12:22:42.170: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:22:42.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-wft9t" for this suite.
Dec 18 12:23:40.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:23:40.733: INFO: namespace: e2e-tests-svc-latency-wft9t, resource: bindings, ignored listing per whitelist
Dec 18 12:23:40.840: INFO: namespace e2e-tests-svc-latency-wft9t deletion completed in 58.477392227s

• [SLOW TEST:115.250 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:23:40.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Dec 18 12:23:41.588: INFO: Waiting up to 5m0s for pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb" in namespace "e2e-tests-svcaccounts-h9567" to be "success or failure"
Dec 18 12:23:41.611: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.599874ms
Dec 18 12:23:43.746: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157839214s
Dec 18 12:23:45.764: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175566654s
Dec 18 12:23:47.869: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280399623s
Dec 18 12:23:50.420: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.832145971s
Dec 18 12:23:52.445: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.856809344s
Dec 18 12:23:54.475: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.886585565s
Dec 18 12:23:56.492: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.903802327s
Dec 18 12:23:58.573: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.984728357s
STEP: Saw pod success
Dec 18 12:23:58.573: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb" satisfied condition "success or failure"
Dec 18 12:23:58.582: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb container token-test: 
STEP: delete the pod
Dec 18 12:23:58.934: INFO: Waiting for pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb to disappear
Dec 18 12:23:58.981: INFO: Pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-fh4zb no longer exists
STEP: Creating a pod to test consume service account root CA
Dec 18 12:23:59.004: INFO: Waiting up to 5m0s for pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6" in namespace "e2e-tests-svcaccounts-h9567" to be "success or failure"
Dec 18 12:23:59.064: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 60.552679ms
Dec 18 12:24:01.093: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089681011s
Dec 18 12:24:03.108: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104007878s
Dec 18 12:24:05.163: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159389225s
Dec 18 12:24:07.455: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.451680899s
Dec 18 12:24:09.569: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.565090594s
Dec 18 12:24:11.592: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.588483353s
Dec 18 12:24:13.628: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.624385284s
Dec 18 12:24:15.654: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.650296613s
STEP: Saw pod success
Dec 18 12:24:15.655: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6" satisfied condition "success or failure"
Dec 18 12:24:15.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6 container root-ca-test: 
STEP: delete the pod
Dec 18 12:24:15.795: INFO: Waiting for pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6 to disappear
Dec 18 12:24:15.839: INFO: Pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-ns6x6 no longer exists
STEP: Creating a pod to test consume service account namespace
Dec 18 12:24:15.881: INFO: Waiting up to 5m0s for pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv" in namespace "e2e-tests-svcaccounts-h9567" to be "success or failure"
Dec 18 12:24:15.903: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 21.629713ms
Dec 18 12:24:17.936: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054083996s
Dec 18 12:24:19.954: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072869961s
Dec 18 12:24:22.019: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137772292s
Dec 18 12:24:24.052: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171031648s
Dec 18 12:24:26.076: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194470226s
Dec 18 12:24:28.189: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.307740259s
Dec 18 12:24:30.227: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.345706406s
STEP: Saw pod success
Dec 18 12:24:30.227: INFO: Pod "pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv" satisfied condition "success or failure"
Dec 18 12:24:30.261: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv container namespace-test: 
STEP: delete the pod
Dec 18 12:24:30.498: INFO: Waiting for pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv to disappear
Dec 18 12:24:30.518: INFO: Pod pod-service-account-3a05da3f-2191-11ea-ad77-0242ac110004-g7bvv no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:24:30.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-h9567" for this suite.
Dec 18 12:24:38.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:24:38.809: INFO: namespace: e2e-tests-svcaccounts-h9567, resource: bindings, ignored listing per whitelist
Dec 18 12:24:38.984: INFO: namespace e2e-tests-svcaccounts-h9567 deletion completed in 8.447840251s

• [SLOW TEST:58.145 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:24:38.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 18 12:24:39.285: INFO: Waiting up to 5m0s for pod "pod-5c681002-2191-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-2z48s" to be "success or failure"
Dec 18 12:24:39.323: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.686103ms
Dec 18 12:24:41.583: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297152351s
Dec 18 12:24:43.611: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325744295s
Dec 18 12:24:46.009: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723474039s
Dec 18 12:24:48.224: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.938587372s
Dec 18 12:24:50.259: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.97339689s
STEP: Saw pod success
Dec 18 12:24:50.259: INFO: Pod "pod-5c681002-2191-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:24:50.271: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5c681002-2191-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 12:24:50.368: INFO: Waiting for pod pod-5c681002-2191-11ea-ad77-0242ac110004 to disappear
Dec 18 12:24:50.474: INFO: Pod pod-5c681002-2191-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:24:50.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2z48s" for this suite.
Dec 18 12:24:56.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:24:56.763: INFO: namespace: e2e-tests-emptydir-2z48s, resource: bindings, ignored listing per whitelist
Dec 18 12:24:56.829: INFO: namespace e2e-tests-emptydir-2z48s deletion completed in 6.339134353s

• [SLOW TEST:17.842 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
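The (root,0666,tmpfs) check above boils down to one property: a file created inside the emptyDir mount must carry mode 0666. A minimal local sketch of that property, assuming nothing beyond the standard library (a plain temp directory stands in for the tmpfs-backed emptyDir, and `make_emptydir_file` is a hypothetical helper, not the e2e framework's code):

```python
import os
import stat
import tempfile

def make_emptydir_file(mode: int = 0o666) -> int:
    """Create a file the way the test pod would inside its emptyDir
    mount, chmod it, and report the resulting permission bits."""
    with tempfile.TemporaryDirectory() as scratch:  # stand-in for the tmpfs emptyDir
        path = os.path.join(scratch, "test-file")
        with open(path, "w") as f:
            f.write("mount-tester content")
        os.chmod(path, mode)
        return stat.S_IMODE(os.stat(path).st_mode)

print(oct(make_emptydir_file()))  # -> 0o666
```

The real test asserts the same thing indirectly, by reading the mode string the mount-tester container prints to its logs.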
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:24:56.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:24:57.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-t5rms" to be "success or failure"
Dec 18 12:24:57.225: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.835415ms
Dec 18 12:24:59.276: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068081007s
Dec 18 12:25:01.432: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223687473s
Dec 18 12:25:03.784: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576245369s
Dec 18 12:25:05.868: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.660374141s
Dec 18 12:25:07.901: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.693334056s
STEP: Saw pod success
Dec 18 12:25:07.901: INFO: Pod "downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:25:07.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 12:25:08.045: INFO: Waiting for pod downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004 to disappear
Dec 18 12:25:08.724: INFO: Pod downwardapi-volume-66ff9b5e-2191-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:25:08.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t5rms" for this suite.
Dec 18 12:25:15.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:25:15.165: INFO: namespace: e2e-tests-projected-t5rms, resource: bindings, ignored listing per whitelist
Dec 18 12:25:15.195: INFO: namespace e2e-tests-projected-t5rms deletion completed in 6.454563663s

• [SLOW TEST:18.365 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
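The `DefaultMode` the test above exercises appears in pod specs as a decimal integer (420 in the API JSON), while the assertion is made against the octal, `ls -l`-style rendering of that value. A small sketch of the correspondence (`to_ls_string` is an illustrative helper, not part of any Kubernetes client):

```python
# DefaultMode is serialized as a decimal integer in the pod spec;
# 420 decimal is 0o644 octal, i.e. rw-r--r--.
DEFAULT_MODE = 420
assert DEFAULT_MODE == 0o644

def to_ls_string(mode: int) -> str:
    """Render permission bits the way `ls -l` (and the test's
    client container output) would show them."""
    bits = "rwxrwxrwx"
    return "".join(c if mode & (1 << (8 - i)) else "-" for i, c in enumerate(bits))

print(to_ls_string(DEFAULT_MODE))  # -> rw-r--r--
```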
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:25:15.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 18 12:25:15.375: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix024148272/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:25:15.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6vtzg" for this suite.
Dec 18 12:25:21.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:25:21.908: INFO: namespace: e2e-tests-kubectl-6vtzg, resource: bindings, ignored listing per whitelist
Dec 18 12:25:21.926: INFO: namespace e2e-tests-kubectl-6vtzg deletion completed in 6.380210458s

• [SLOW TEST:6.731 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
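The `--unix-socket` test above starts the proxy bound to a socket path instead of a TCP port and then issues a plain HTTP GET for `/api/` over that socket. The mechanics can be sketched locally without a cluster; everything below (the socket path, the handler, the canned `APIVersions` body) is a stand-in for the real kubectl proxy, not its implementation:

```python
import json
import os
import socket
import tempfile
import threading

# Hypothetical socket path; the real run used /tmp/kubectl-proxy-unix024148272/test.
SOCK = os.path.join(tempfile.mkdtemp(), "proxy.sock")

def serve_one(srv: socket.socket) -> None:
    """Minimal stand-in for `kubectl proxy --unix-socket=PATH`:
    answer a single GET /api/ with an APIVersions document."""
    conn, _ = srv.accept()
    conn.recv(4096)  # read and discard the request line + headers
    body = json.dumps({"kind": "APIVersions", "versions": ["v1"]}).encode()
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Type: application/json\r\n\r\n" + body)
    conn.close()
    srv.close()

def get_api() -> dict:
    """The 'retrieving proxy /api/ output' step: an HTTP GET issued
    over a filesystem socket path instead of a TCP port."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK)
    srv.listen(1)
    threading.Thread(target=serve_one, args=(srv,), daemon=True).start()
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCK)
    c.sendall(b"GET /api/ HTTP/1.0\r\n\r\n")
    raw = b""
    while chunk := c.recv(4096):
        raw += chunk
    c.close()
    return json.loads(raw.split(b"\r\n\r\n", 1)[1])

result = get_api()
print(result["versions"])  # -> ['v1']
```

Binding before spawning the accept thread avoids the connect/bind race; the walrus operator needs Python 3.8+.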
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:25:21.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:25:22.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-fhmlb" to be "success or failure"
Dec 18 12:25:22.276: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 157.548197ms
Dec 18 12:25:24.301: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182582479s
Dec 18 12:25:26.327: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208356978s
Dec 18 12:25:28.415: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296933626s
Dec 18 12:25:30.439: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.320168916s
Dec 18 12:25:32.469: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.350291514s
STEP: Saw pod success
Dec 18 12:25:32.469: INFO: Pod "downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:25:32.476: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 12:25:32.658: INFO: Waiting for pod downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004 to disappear
Dec 18 12:25:32.678: INFO: Pod downwardapi-volume-75f01918-2191-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:25:32.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fhmlb" for this suite.
Dec 18 12:25:38.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:25:38.838: INFO: namespace: e2e-tests-projected-fhmlb, resource: bindings, ignored listing per whitelist
Dec 18 12:25:38.880: INFO: namespace e2e-tests-projected-fhmlb deletion completed in 6.194244257s

• [SLOW TEST:16.954 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
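The behaviour asserted above: when a container sets no CPU limit, the downward API exposes the node's allocatable CPU as the default value of `limits.cpu`. The rule reduces to a one-line fallback; the function name and the millicpu figures below are illustrative, not taken from the framework:

```python
def effective_cpu_limit(container_limit_millicpu, node_allocatable_millicpu):
    """What the downward API reports for limits.cpu: the container's own
    limit when set, otherwise the node's allocatable CPU."""
    if container_limit_millicpu is not None:
        return container_limit_millicpu
    return node_allocatable_millicpu

# Hypothetical numbers: a node with 4 allocatable cores.
print(effective_cpu_limit(None, 4000))  # -> 4000 (no limit set: node allocatable wins)
print(effective_cpu_limit(500, 4000))   # -> 500  (explicit limit wins)
```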
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:25:38.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1218 12:25:52.401449       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 12:25:52.401: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:25:52.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5n96g" for this suite.
Dec 18 12:26:14.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:26:14.936: INFO: namespace: e2e-tests-gc-5n96g, resource: bindings, ignored listing per whitelist
Dec 18 12:26:14.987: INFO: namespace e2e-tests-gc-5n96g deletion completed in 22.561470991s

• [SLOW TEST:36.107 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
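The property the garbage-collector test above verifies: a dependent carrying several ownerReferences is collected only once *all* of its owners are gone, so the pods owned by both `simpletest-rc-to-be-deleted` and `simpletest-rc-to-stay` must survive the first rc's deletion. A toy model of that decision (a plain dict of pod name to owner names, not the real GC graph):

```python
def surviving_pods(pods: dict, deleted_owners: set) -> list:
    """A pod survives a collection pass as long as at least one of its
    owners has not been deleted."""
    return [name for name, owners in pods.items()
            if any(o not in deleted_owners for o in owners)]

pods = {
    "pod-a": ["simpletest-rc-to-be-deleted"],                           # sole owner deleted
    "pod-b": ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"],  # one owner remains
}
print(surviving_pods(pods, {"simpletest-rc-to-be-deleted"}))  # -> ['pod-b']
```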
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:26:14.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-95a01dac-2191-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 12:26:15.267: INFO: Waiting up to 5m0s for pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-49dhs" to be "success or failure"
Dec 18 12:26:15.284: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.066056ms
Dec 18 12:26:17.300: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032909571s
Dec 18 12:26:19.314: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047348323s
Dec 18 12:26:21.515: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248014901s
Dec 18 12:26:23.539: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272078798s
Dec 18 12:26:25.551: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.283861008s
STEP: Saw pod success
Dec 18 12:26:25.551: INFO: Pod "pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:26:25.556: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 18 12:26:25.682: INFO: Waiting for pod pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004 to disappear
Dec 18 12:26:25.694: INFO: Pod pod-configmaps-95a1175a-2191-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:26:25.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-49dhs" for this suite.
Dec 18 12:26:31.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:26:31.861: INFO: namespace: e2e-tests-configmap-49dhs, resource: bindings, ignored listing per whitelist
Dec 18 12:26:31.985: INFO: namespace e2e-tests-configmap-49dhs deletion completed in 6.277614238s

• [SLOW TEST:16.997 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
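What the ConfigMap volume test above consumes can be sketched as the projection the volume plugin performs: each key of the ConfigMap becomes a file under the mount path whose content is the value. A local stand-in, assuming only the standard library (the key/value pair is hypothetical; the real test uses generated names like `configmap-test-volume-…`):

```python
import os
import tempfile

def project_configmap(data: dict) -> str:
    """Local sketch of the configmap volume plugin: write each key as a
    file under a fresh mount directory and return that directory."""
    mount = tempfile.mkdtemp(prefix="configmap-volume-")
    for key, value in data.items():
        with open(os.path.join(mount, key), "w") as f:
            f.write(value)
    return mount

mount = project_configmap({"data-1": "value-1"})
with open(os.path.join(mount, "data-1")) as f:
    content = f.read()
print(content)  # -> value-1
```

The e2e test checks the same round trip from inside the pod, by having the `configmap-volume-test` container cat the mounted file.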
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:26:31.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 12:26:32.446: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9fdad10a-2191-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0020e57a2), BlockOwnerDeletion:(*bool)(0xc0020e57a3)}}
Dec 18 12:26:32.485: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9fd19fee-2191-11ea-a994-fa163e34d433", Controller:(*bool)(0xc002986842), BlockOwnerDeletion:(*bool)(0xc002986843)}}
Dec 18 12:26:32.635: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9fd39efa-2191-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0029869da), BlockOwnerDeletion:(*bool)(0xc0029869db)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:26:37.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-sswd5" for this suite.
Dec 18 12:26:43.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:26:44.007: INFO: namespace: e2e-tests-gc-sswd5, resource: bindings, ignored listing per whitelist
Dec 18 12:26:44.134: INFO: namespace e2e-tests-gc-sswd5 deletion completed in 6.379015561s

• [SLOW TEST:12.149 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
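The dependency-circle test above builds exactly the cycle visible in the log (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and checks that collection still terminates. The standard way a collector avoids looping forever on such a circle is a visited set; a toy traversal over the same three pods (the `owner_of` map mirrors the log, the function itself is illustrative):

```python
def reachable_owners(start: str, owner_of: dict) -> list:
    """Walk an ownerReference chain, stopping as soon as a node repeats,
    so a dependency circle cannot cause an infinite loop."""
    seen = []
    node = start
    while node is not None and node not in seen:
        seen.append(node)
        node = owner_of.get(node)
    return seen

# The circle from the log: pod1 <- pod3 <- pod2 <- pod1.
owner_of = {"pod1": "pod3", "pod3": "pod2", "pod2": "pod1"}
print(reachable_owners("pod1", owner_of))  # -> ['pod1', 'pod3', 'pod2']
```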
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:26:44.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 12:26:44.310: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 18 12:26:44.363: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 18 12:26:49.676: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 18 12:26:53.702: INFO: Creating deployment "test-rolling-update-deployment"
Dec 18 12:26:53.722: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 18 12:26:53.813: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 18 12:26:55.863: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 18 12:26:55.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268814, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 12:26:57.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268814, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 12:26:59.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268814, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712268813, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 12:27:01.934: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 18 12:27:01.948: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-k8fzq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8fzq/deployments/test-rolling-update-deployment,UID:ac8c46b3-2191-11ea-a994-fa163e34d433,ResourceVersion:15234950,Generation:1,CreationTimestamp:2019-12-18 12:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-18 12:26:53 +0000 UTC 2019-12-18 12:26:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-18 12:27:01 +0000 UTC 2019-12-18 12:26:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 18 12:27:01.953: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-k8fzq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8fzq/replicasets/test-rolling-update-deployment-75db98fb4c,UID:aca2a170-2191-11ea-a994-fa163e34d433,ResourceVersion:15234941,Generation:1,CreationTimestamp:2019-12-18 12:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ac8c46b3-2191-11ea-a994-fa163e34d433 0xc00253e907 0xc00253e908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 18 12:27:01.953: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 18 12:27:01.953: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-k8fzq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k8fzq/replicasets/test-rolling-update-controller,UID:a6f29c19-2191-11ea-a994-fa163e34d433,ResourceVersion:15234949,Generation:2,CreationTimestamp:2019-12-18 12:26:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ac8c46b3-2191-11ea-a994-fa163e34d433 0xc00253e847 0xc00253e848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 12:27:01.958: INFO: Pod "test-rolling-update-deployment-75db98fb4c-m2lxg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-m2lxg,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-k8fzq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k8fzq/pods/test-rolling-update-deployment-75db98fb4c-m2lxg,UID:aca6b0bc-2191-11ea-a994-fa163e34d433,ResourceVersion:15234940,Generation:0,CreationTimestamp:2019-12-18 12:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c aca2a170-2191-11ea-a994-fa163e34d433 0xc001efa3d7 0xc001efa3d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hthzl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hthzl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hthzl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001efa4b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001efa4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:26:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:27:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:27:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:26:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-18 12:26:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-18 12:27:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://09f44cc4d14d3e1817251c51a9cc6ea81cae3259dcaa125b27023428ec186c1b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:27:01.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-k8fzq" for this suite.
Dec 18 12:27:12.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:27:12.832: INFO: namespace: e2e-tests-deployment-k8fzq, resource: bindings, ignored listing per whitelist
Dec 18 12:27:12.871: INFO: namespace e2e-tests-deployment-k8fzq deletion completed in 10.907262823s

• [SLOW TEST:28.736 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:27:12.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004
Dec 18 12:27:13.077: INFO: Pod name my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004: Found 0 pods out of 1
Dec 18 12:27:18.090: INFO: Pod name my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004: Found 1 pods out of 1
Dec 18 12:27:18.090: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004" are running
Dec 18 12:27:24.123: INFO: Pod "my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004-qn2dl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 12:27:13 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 12:27:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 12:27:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 12:27:13 +0000 UTC Reason: Message:}])
Dec 18 12:27:24.123: INFO: Trying to dial the pod
Dec 18 12:27:29.181: INFO: Controller my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004: Got expected result from replica 1 [my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004-qn2dl]: "my-hostname-basic-b810033b-2191-11ea-ad77-0242ac110004-qn2dl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:27:29.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-tr2pk" for this suite.
Dec 18 12:27:35.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:27:35.392: INFO: namespace: e2e-tests-replication-controller-tr2pk, resource: bindings, ignored listing per whitelist
Dec 18 12:27:35.459: INFO: namespace e2e-tests-replication-controller-tr2pk deletion completed in 6.268270324s

• [SLOW TEST:22.587 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
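The ReplicationController check in the spec above can be sketched as follows. This is an illustrative reconstruction, not the e2e framework's actual helper: the test dials each pod of the serve-hostname controller and expects the response body to equal that pod's own name, which is exactly what the `Got expected result from replica 1` line records.

```python
# Illustrative sketch (not the real e2e helper): a replica serving its own
# hostname is considered correct when the body it returns matches its pod name.
def all_replicas_serve_own_hostname(responses):
    """responses: mapping of pod name -> HTTP body that pod returned."""
    return all(body == name for name, body in responses.items())
```

In the run above, the single replica ending in `-qn2dl` returned its own name, satisfying the one required success.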
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:27:35.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-jxrl8
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-jxrl8
STEP: Deleting pre-stop pod
Dec 18 12:28:05.005: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:28:05.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-jxrl8" for this suite.
Dec 18 12:28:52.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:28:52.303: INFO: namespace: e2e-tests-prestop-jxrl8, resource: bindings, ignored listing per whitelist
Dec 18 12:28:52.333: INFO: namespace e2e-tests-prestop-jxrl8 deletion completed in 47.272158962s

• [SLOW TEST:76.873 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
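The PreStop spec above verifies that the kubelet runs a container's `preStop` lifecycle hook before killing it: after the tester pod is deleted, the server pod's state shows `"prestop": 1` received. A minimal sketch of the kind of tester-pod manifest involved, written as a Python dict; the image, command, and URL here are hypothetical stand-ins, not the exact e2e manifest:

```python
# Hypothetical pod manifest sketch (not the exact e2e test pod): the container
# notifies the server pod from its preStop hook, which the kubelet executes
# before sending SIGTERM during pod deletion.
tester_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "tester"},
    "spec": {
        "containers": [{
            "name": "tester",
            "image": "busybox",  # stand-in image
            "lifecycle": {
                "preStop": {
                    # stand-in command/URL; the hook's effect is what the
                    # server's "prestop": 1 counter observes
                    "exec": {"command": ["wget", "-qO-", "http://server/prestop"]}
                }
            },
        }],
        "terminationGracePeriodSeconds": 30,
    },
}
```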
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:28:52.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 12:28:52.632: INFO: Creating deployment "nginx-deployment"
Dec 18 12:28:52.645: INFO: Waiting for observed generation 1
Dec 18 12:28:55.284: INFO: Waiting for all required pods to come up
Dec 18 12:28:56.161: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 18 12:29:36.525: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 18 12:29:36.563: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 18 12:29:36.615: INFO: Updating deployment nginx-deployment
Dec 18 12:29:36.615: INFO: Waiting for observed generation 2
Dec 18 12:29:40.104: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 18 12:29:40.334: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 18 12:29:40.352: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 18 12:29:41.182: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 18 12:29:41.183: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 18 12:29:41.210: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 18 12:29:42.099: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 18 12:29:42.099: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 18 12:29:42.673: INFO: Updating deployment nginx-deployment
Dec 18 12:29:42.673: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 18 12:29:42.956: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 18 12:29:45.181: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
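The expected sizes verified above (old ReplicaSet 8 → 20, new ReplicaSet 5 → 13) follow from proportional scaling: when the deployment is scaled from 10 to 30 with `maxSurge: 3`, each ReplicaSet grows in proportion to its share of the previous allowed total (shown as `deployment.kubernetes.io/max-replicas` in the dumps below). A simplified sketch of that arithmetic; the function name and signature are hypothetical, and the real deployment controller additionally reconciles any rounding leftover so the totals match exactly:

```python
# Simplified sketch of Deployment proportional scaling (hypothetical helper;
# the real controller in kube-controller-manager also distributes rounding
# leftovers so the sum exactly equals replicas + maxSurge).
def proportional_scale(rs_sizes, old_allowed_total, new_replicas, max_surge):
    """Grow each ReplicaSet in proportion to its share of the old allowed total."""
    new_allowed_total = new_replicas + max_surge  # 30 + 3 = 33 in this run
    return [round(size * new_allowed_total / old_allowed_total)
            for size in rs_sizes]
```

With the sizes from this run, `proportional_scale([8, 5], 13, 30, 3)` yields `[20, 13]`, matching the `.spec.replicas` values the test checks.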
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 18 12:29:46.075: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hmmj7/deployments/nginx-deployment,UID:f36f015a-2191-11ea-a994-fa163e34d433,ResourceVersion:15235399,Generation:3,CreationTimestamp:2019-12-18 12:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-18 12:29:39 +0000 UTC 2019-12-18 12:28:52 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-18 12:29:43 +0000 UTC 2019-12-18 12:29:43 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 18 12:29:46.186: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hmmj7/replicasets/nginx-deployment-5c98f8fb5,UID:0da68b34-2192-11ea-a994-fa163e34d433,ResourceVersion:15235396,Generation:3,CreationTimestamp:2019-12-18 12:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f36f015a-2191-11ea-a994-fa163e34d433 0xc000f69f67 0xc000f69f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 12:29:46.187: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 18 12:29:46.187: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hmmj7/replicasets/nginx-deployment-85ddf47c5d,UID:f374068c-2191-11ea-a994-fa163e34d433,ResourceVersion:15235438,Generation:3,CreationTimestamp:2019-12-18 12:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f36f015a-2191-11ea-a994-fa163e34d433 0xc000cc4057 0xc000cc4058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 18 12:29:46.429: INFO: Pod "nginx-deployment-5c98f8fb5-5tfn5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5tfn5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-5tfn5,UID:0e3db377-2192-11ea-a994-fa163e34d433,ResourceVersion:15235390,Generation:0,CreationTimestamp:2019-12-18 12:29:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001287497 0xc001287498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001287500} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001287520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 12:29:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.430: INFO: Pod "nginx-deployment-5c98f8fb5-8bz2x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8bz2x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-8bz2x,UID:0dc640fc-2192-11ea-a994-fa163e34d433,ResourceVersion:15235385,Generation:0,CreationTimestamp:2019-12-18 12:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001287667 0xc001287668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012876d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0012876f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 12:29:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.431: INFO: Pod "nginx-deployment-5c98f8fb5-9vlc6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9vlc6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-9vlc6,UID:12cc30c4-2192-11ea-a994-fa163e34d433,ResourceVersion:15235428,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc0012877b7 0xc0012877b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0012878a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0012878c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.431: INFO: Pod "nginx-deployment-5c98f8fb5-bhlqr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bhlqr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-bhlqr,UID:13458ddc-2192-11ea-a994-fa163e34d433,ResourceVersion:15235450,Generation:0,CreationTimestamp:2019-12-18 12:29:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001287b27 0xc001287b28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001287b90} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001287bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.432: INFO: Pod "nginx-deployment-5c98f8fb5-cc9rx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cc9rx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-cc9rx,UID:1226f6a6-2192-11ea-a994-fa163e34d433,ResourceVersion:15235414,Generation:0,CreationTimestamp:2019-12-18 12:29:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001287c57 0xc001287c58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001287d90} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001287db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.433: INFO: Pod "nginx-deployment-5c98f8fb5-dhwgp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dhwgp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-dhwgp,UID:0e21c5c3-2192-11ea-a994-fa163e34d433,ResourceVersion:15235387,Generation:0,CreationTimestamp:2019-12-18 12:29:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001287e27 0xc001287e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001287f20} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001287f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 12:29:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.433: INFO: Pod "nginx-deployment-5c98f8fb5-dlxm2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dlxm2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-dlxm2,UID:131fb2b8-2192-11ea-a994-fa163e34d433,ResourceVersion:15235443,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dfe0d7 0xc001dfe0d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dfe140} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfe160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.435: INFO: Pod "nginx-deployment-5c98f8fb5-f8kjf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f8kjf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-f8kjf,UID:0dc2eadd-2192-11ea-a994-fa163e34d433,ResourceVersion:15235377,Generation:0,CreationTimestamp:2019-12-18 12:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dfe1d7 0xc001dfe1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dfe250} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfe2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 12:29:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.437: INFO: Pod "nginx-deployment-5c98f8fb5-hnvpr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hnvpr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-hnvpr,UID:131fef92-2192-11ea-a994-fa163e34d433,ResourceVersion:15235446,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dfe437 0xc001dfe438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dfe4a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfe4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.437: INFO: Pod "nginx-deployment-5c98f8fb5-nchp2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nchp2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-nchp2,UID:12cc10cd-2192-11ea-a994-fa163e34d433,ResourceVersion:15235433,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dfe947 0xc001dfe948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dfe9b0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfe9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.438: INFO: Pod "nginx-deployment-5c98f8fb5-p2jlp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p2jlp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-p2jlp,UID:0dc637a1-2192-11ea-a994-fa163e34d433,ResourceVersion:15235383,Generation:0,CreationTimestamp:2019-12-18 12:29:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dfea47 0xc001dfea48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dfeab0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfead0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 12:29:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.439: INFO: Pod "nginx-deployment-5c98f8fb5-wcgg8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wcgg8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-wcgg8,UID:131f9483-2192-11ea-a994-fa163e34d433,ResourceVersion:15235445,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dfece7 0xc001dfece8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dfed60} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfed80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.439: INFO: Pod "nginx-deployment-5c98f8fb5-wfdmn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wfdmn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-5c98f8fb5-wfdmn,UID:132028a8-2192-11ea-a994-fa163e34d433,ResourceVersion:15235449,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0da68b34-2192-11ea-a994-fa163e34d433 0xc001dff0a7 0xc001dff0a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001dff180} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001dff1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.440: INFO: Pod "nginx-deployment-85ddf47c5d-2pftv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2pftv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-2pftv,UID:12cb79c4-2192-11ea-a994-fa163e34d433,ResourceVersion:15235431,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001dff227 0xc001dff228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001dff290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dff2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.441: INFO: Pod "nginx-deployment-85ddf47c5d-4x4sh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4x4sh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-4x4sh,UID:f3b1ee71-2191-11ea-a994-fa163e34d433,ResourceVersion:15235311,Generation:0,CreationTimestamp:2019-12-18 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001dff617 0xc001dff618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001dff680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dff6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-18 12:28:56 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f73b55a46ec84df4b78dc1dec91043cb43f2b1cf78f893b4913741c0fb7bdf59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.442: INFO: Pod "nginx-deployment-85ddf47c5d-7x7nw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7x7nw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-7x7nw,UID:f3a7d2a2-2191-11ea-a994-fa163e34d433,ResourceVersion:15235299,Generation:0,CreationTimestamp:2019-12-18 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001dff887 0xc001dff888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001dff8f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dff910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-18 12:28:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://50055b35d5103421900ea8117d0f8e12efdb72518239c4289a019bec6de9e00f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.442: INFO: Pod "nginx-deployment-85ddf47c5d-8d24p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8d24p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-8d24p,UID:1226e9e2-2192-11ea-a994-fa163e34d433,ResourceVersion:15235452,Generation:0,CreationTimestamp:2019-12-18 12:29:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001dffc87 0xc001dffc88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001dfff20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001dfff40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-18 12:29:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.443: INFO: Pod "nginx-deployment-85ddf47c5d-ck2pp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ck2pp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-ck2pp,UID:f3a7d37c-2191-11ea-a994-fa163e34d433,ResourceVersion:15235308,Generation:0,CreationTimestamp:2019-12-18 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001dffff7 0xc001dffff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2019-12-18 12:28:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f2fd1e29398c47b2b9d73627a3ed7292e8606f5517ad3a7ca6116a8d99105688}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.444: INFO: Pod "nginx-deployment-85ddf47c5d-gn6pr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gn6pr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-gn6pr,UID:f387bde1-2191-11ea-a994-fa163e34d433,ResourceVersion:15235297,Generation:0,CreationTimestamp:2019-12-18 12:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae147 0xc001cae148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-18 12:28:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4af532fb4287e92abed961abc0493b66d01058623e1f54220c24491f10c9219d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.445: INFO: Pod "nginx-deployment-85ddf47c5d-gxm2j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gxm2j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-gxm2j,UID:131f9e58-2192-11ea-a994-fa163e34d433,ResourceVersion:15235444,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae2c7 0xc001cae2c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae3b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.445: INFO: Pod "nginx-deployment-85ddf47c5d-htn47" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-htn47,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-htn47,UID:122b8bf1-2192-11ea-a994-fa163e34d433,ResourceVersion:15235409,Generation:0,CreationTimestamp:2019-12-18 12:29:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae447 0xc001cae448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.447: INFO: Pod "nginx-deployment-85ddf47c5d-jkctm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jkctm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-jkctm,UID:131f777d-2192-11ea-a994-fa163e34d433,ResourceVersion:15235451,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae627 0xc001cae628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.447: INFO: Pod "nginx-deployment-85ddf47c5d-l5xb6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l5xb6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-l5xb6,UID:131fe1ac-2192-11ea-a994-fa163e34d433,ResourceVersion:15235439,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae797 0xc001cae798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.448: INFO: Pod "nginx-deployment-85ddf47c5d-lj5j2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lj5j2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-lj5j2,UID:131f3b27-2192-11ea-a994-fa163e34d433,ResourceVersion:15235440,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae8a7 0xc001cae8a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cae910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cae930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.448: INFO: Pod "nginx-deployment-85ddf47c5d-mqkc9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mqkc9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-mqkc9,UID:12cc063b-2192-11ea-a994-fa163e34d433,ResourceVersion:15235432,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cae9c7 0xc001cae9c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caeb50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caeb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.449: INFO: Pod "nginx-deployment-85ddf47c5d-p6fv5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p6fv5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-p6fv5,UID:12cbdb67-2192-11ea-a994-fa163e34d433,ResourceVersion:15235435,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001caebe7 0xc001caebe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caec50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caec70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.449: INFO: Pod "nginx-deployment-85ddf47c5d-q6csj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q6csj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-q6csj,UID:131f8c8b-2192-11ea-a994-fa163e34d433,ResourceVersion:15235442,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001caed47 0xc001caed48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caf190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caf1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.450: INFO: Pod "nginx-deployment-85ddf47c5d-qhcfc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qhcfc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-qhcfc,UID:f38cb818-2191-11ea-a994-fa163e34d433,ResourceVersion:15235314,Generation:0,CreationTimestamp:2019-12-18 12:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001caf227 0xc001caf228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caf3d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caf3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-18 12:28:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b163bbb794596763c954a44c9a4e5b040ae2b980f95fb0cc2696d1ffb25b1ad9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.450: INFO: Pod "nginx-deployment-85ddf47c5d-s6fwl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s6fwl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-s6fwl,UID:f3b0d0e1-2191-11ea-a994-fa163e34d433,ResourceVersion:15235327,Generation:0,CreationTimestamp:2019-12-18 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001caf4b7 0xc001caf4b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caf560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caf580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2019-12-18 12:28:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://50df2abfb40653057bf8c5a83dab09bf7e411dd8d00a405329f9d06e88532110}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.451: INFO: Pod "nginx-deployment-85ddf47c5d-w7749" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w7749,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-w7749,UID:f3a7e66d-2191-11ea-a994-fa163e34d433,ResourceVersion:15235303,Generation:0,CreationTimestamp:2019-12-18 12:28:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001caf737 0xc001caf738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caf7a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caf7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-18 12:28:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0f083afa8ce5a2affdead528b29714a4ee8244a4322d93a0e6bc4955bcce3111}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.452: INFO: Pod "nginx-deployment-85ddf47c5d-xl727" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xl727,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-xl727,UID:f38cbcc2-2191-11ea-a994-fa163e34d433,ResourceVersion:15235323,Generation:0,CreationTimestamp:2019-12-18 12:28:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001caf937 0xc001caf938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001caf9a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001caf9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:28:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-18 12:28:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 12:29:22 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2f42afd19d9ed985643946efd942999d7a0b1121c3799c1453a560ac617c89a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.452: INFO: Pod "nginx-deployment-85ddf47c5d-zj2tr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zj2tr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-zj2tr,UID:122bc17a-2192-11ea-a994-fa163e34d433,ResourceVersion:15235411,Generation:0,CreationTimestamp:2019-12-18 12:29:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cafb07 0xc001cafb08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cafb70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cafb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 12:29:46.452: INFO: Pod "nginx-deployment-85ddf47c5d-zn2r8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zn2r8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hmmj7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hmmj7/pods/nginx-deployment-85ddf47c5d-zn2r8,UID:12caa71d-2192-11ea-a994-fa163e34d433,ResourceVersion:15235420,Generation:0,CreationTimestamp:2019-12-18 12:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f374068c-2191-11ea-a994-fa163e34d433 0xc001cafcb7 0xc001cafcb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ntdhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ntdhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ntdhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001cafd20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cafd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:29:45 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:29:46.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hmmj7" for this suite.
Dec 18 12:30:39.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:30:39.307: INFO: namespace: e2e-tests-deployment-hmmj7, resource: bindings, ignored listing per whitelist
Dec 18 12:30:40.860: INFO: namespace e2e-tests-deployment-hmmj7 deletion completed in 53.112805792s

• [SLOW TEST:108.526 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:30:40.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 18 12:31:06.700: INFO: Successfully updated pod "pod-update-activedeadlineseconds-345b88ef-2192-11ea-ad77-0242ac110004"
Dec 18 12:31:06.700: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-345b88ef-2192-11ea-ad77-0242ac110004" in namespace "e2e-tests-pods-q65kn" to be "terminated due to deadline exceeded"
Dec 18 12:31:06.717: INFO: Pod "pod-update-activedeadlineseconds-345b88ef-2192-11ea-ad77-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 17.138277ms
Dec 18 12:31:09.145: INFO: Pod "pod-update-activedeadlineseconds-345b88ef-2192-11ea-ad77-0242ac110004": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.445118242s
Dec 18 12:31:09.145: INFO: Pod "pod-update-activedeadlineseconds-345b88ef-2192-11ea-ad77-0242ac110004" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:31:09.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-q65kn" for this suite.
Dec 18 12:31:17.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:31:17.500: INFO: namespace: e2e-tests-pods-q65kn, resource: bindings, ignored listing per whitelist
Dec 18 12:31:17.544: INFO: namespace e2e-tests-pods-q65kn deletion completed in 8.375501677s

• [SLOW TEST:36.684 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:31:17.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 18 12:31:17.679: INFO: Waiting up to 5m0s for pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004" in namespace "e2e-tests-containers-9xwvm" to be "success or failure"
Dec 18 12:31:17.746: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 66.952502ms
Dec 18 12:31:19.868: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188893129s
Dec 18 12:31:21.900: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220908823s
Dec 18 12:31:24.211: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531331412s
Dec 18 12:31:26.229: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549662617s
Dec 18 12:31:28.247: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.567591729s
Dec 18 12:31:30.266: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.586213437s
STEP: Saw pod success
Dec 18 12:31:30.266: INFO: Pod "client-containers-49e2055e-2192-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:31:30.275: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-49e2055e-2192-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 12:31:30.523: INFO: Waiting for pod client-containers-49e2055e-2192-11ea-ad77-0242ac110004 to disappear
Dec 18 12:31:30.547: INFO: Pod client-containers-49e2055e-2192-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:31:30.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9xwvm" for this suite.
Dec 18 12:31:36.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:31:36.861: INFO: namespace: e2e-tests-containers-9xwvm, resource: bindings, ignored listing per whitelist
Dec 18 12:31:36.880: INFO: namespace e2e-tests-containers-9xwvm deletion completed in 6.311595947s

• [SLOW TEST:19.336 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:31:36.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 12:31:37.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:31:39.107: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 12:31:39.107: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 18 12:31:39.172: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 18 12:31:39.233: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 18 12:31:39.314: INFO: scanned /root for discovery docs: 
Dec 18 12:31:39.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:06.310: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 18 12:32:06.310: INFO: stdout: "Created e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7\nScaling up e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 18 12:32:06.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:06.533: INFO: stderr: ""
Dec 18 12:32:06.534: INFO: stdout: "e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7-4c8z2 e2e-test-nginx-rc-w6cjc "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 18 12:32:11.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:11.814: INFO: stderr: ""
Dec 18 12:32:11.814: INFO: stdout: "e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7-4c8z2 e2e-test-nginx-rc-w6cjc "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 18 12:32:16.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:17.013: INFO: stderr: ""
Dec 18 12:32:17.013: INFO: stdout: "e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7-4c8z2 "
Dec 18 12:32:17.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7-4c8z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:17.172: INFO: stderr: ""
Dec 18 12:32:17.172: INFO: stdout: "true"
Dec 18 12:32:17.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7-4c8z2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:17.353: INFO: stderr: ""
Dec 18 12:32:17.353: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 18 12:32:17.353: INFO: e2e-test-nginx-rc-0ca172697b639c8173d7d26e82bdbba7-4c8z2 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 18 12:32:17.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-64chw'
Dec 18 12:32:17.468: INFO: stderr: ""
Dec 18 12:32:17.468: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:32:17.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-64chw" for this suite.
Dec 18 12:32:41.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:32:41.701: INFO: namespace: e2e-tests-kubectl-64chw, resource: bindings, ignored listing per whitelist
Dec 18 12:32:41.712: INFO: namespace e2e-tests-kubectl-64chw deletion completed in 24.239158602s

• [SLOW TEST:64.831 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:32:41.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-jwt2f
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-jwt2f
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-jwt2f
Dec 18 12:32:42.092: INFO: Found 0 stateful pods, waiting for 1
Dec 18 12:32:52.105: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 18 12:32:52.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:32:52.952: INFO: stderr: ""
Dec 18 12:32:52.952: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:32:52.952: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:32:52.967: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 18 12:33:02.980: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:33:02.980: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 12:33:03.008: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999633s
Dec 18 12:33:04.022: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988881643s
Dec 18 12:33:05.034: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.974615363s
Dec 18 12:33:06.062: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.962769241s
Dec 18 12:33:07.082: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.934838084s
Dec 18 12:33:08.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.914969849s
Dec 18 12:33:09.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.891094856s
Dec 18 12:33:10.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.866871184s
Dec 18 12:33:11.167: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.845417781s
Dec 18 12:33:12.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 829.616202ms
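The halt verified above is driven by the pod's readiness probe: moving index.html out of nginx's web root makes the HTTP probe fail, the pod turns Ready=false, and the StatefulSet controller refuses to create the next ordinal. The actual manifest is not shown in this log; the fragment below is a hypothetical sketch of the shape such a StatefulSet would take (image, probe path, and labels are assumptions inferred from the selector `baz=blah,foo=bar` and the nginx paths in the exec commands).

```yaml
# Hypothetical sketch -- not the test's real manifest. Illustrates why
# `mv /usr/share/nginx/html/index.html /tmp/` makes the pod unready:
# the httpGet probe on /index.html starts returning 404.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: OrderedReady   # default: one ordinal at a time, halt on unready
  replicas: 1
  selector:
    matchLabels: {baz: blah, foo: bar}
  template:
    metadata:
      labels: {baz: blah, foo: bar}
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
```

With `OrderedReady` (the default), scale-up proceeds 0 → 1 → 2 only while every lower ordinal is Running and Ready, which is exactly what the ten "doesn't scale past 1" checks above confirm.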
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-jwt2f
Dec 18 12:33:13.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:33:13.916: INFO: stderr: ""
Dec 18 12:33:13.916: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 12:33:13.916: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 12:33:13.942: INFO: Found 1 stateful pods, waiting for 3
Dec 18 12:33:23.973: INFO: Found 2 stateful pods, waiting for 3
Dec 18 12:33:34.098: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 12:33:34.098: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 12:33:34.098: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 12:33:43.985: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 12:33:43.985: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 12:33:43.985: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 18 12:33:44.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:33:44.912: INFO: stderr: ""
Dec 18 12:33:44.913: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:33:44.913: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:33:44.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:33:45.371: INFO: stderr: ""
Dec 18 12:33:45.371: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:33:45.371: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:33:45.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:33:46.055: INFO: stderr: ""
Dec 18 12:33:46.055: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:33:46.055: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:33:46.056: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 12:33:46.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:33:46.157: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:33:46.157: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:33:46.180: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999788s
Dec 18 12:33:47.200: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985152531s
Dec 18 12:33:48.216: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965554261s
Dec 18 12:33:49.240: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949516631s
Dec 18 12:33:50.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.925091253s
Dec 18 12:33:51.278: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.903865027s
Dec 18 12:33:52.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.887660014s
Dec 18 12:33:53.309: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.869402695s
Dec 18 12:33:54.323: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.856337738s
Dec 18 12:33:55.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 842.475714ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-jwt2f
Dec 18 12:33:56.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:33:57.030: INFO: stderr: ""
Dec 18 12:33:57.030: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 12:33:57.030: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 12:33:57.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:33:57.615: INFO: stderr: ""
Dec 18 12:33:57.615: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 12:33:57.615: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 12:33:57.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:33:58.212: INFO: rc: 126
Dec 18 12:33:58.212: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
 command terminated with exit code 126
 []  0xc0013606f0 exit status 126   true [0xc0018d8118 0xc0018d8130 0xc0018d8148] [0xc0018d8118 0xc0018d8130 0xc0018d8148] [0xc0018d8128 0xc0018d8140] [0x935700 0x935700] 0xc0023d0660 }:
Command stdout:
OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown

stderr:
command terminated with exit code 126

error:
exit status 126

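Note that the 126 above does not come from `mv`: the `|| true` guard makes the in-container shell command always exit 0, so any nonzero rc in these retries (126: cannot exec into a stopped container; 1: container or pod not found) originates in the `kubectl exec` transport before the shell ever runs. A minimal local sketch of the guard's effect (no cluster needed):

```shell
# `mv` fails here (source does not exist), but `|| true` absorbs the
# failure, so the compound command exits 0 -- mirroring the command the
# test runs via `kubectl exec ... /bin/sh -c 'mv ... || true'`.
mv /nonexistent/index.html /tmp/ 2>/dev/null || true
echo "exit=$?"
# prints: exit=0
```

This is why the retry loop keys off the kubectl return code rather than the command's stdout: once ss-2 is gone, the exec can never reach a shell, and the test eventually gives up and proceeds to scale to 0.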
Dec 18 12:34:08.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:34:08.467: INFO: rc: 1
Dec 18 12:34:08.468: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001db7c20 exit status 1   true [0xc001fe40c8 0xc001fe40e0 0xc001fe40f8] [0xc001fe40c8 0xc001fe40e0 0xc001fe40f8] [0xc001fe40d8 0xc001fe40f0] [0x935700 0x935700] 0xc00289e9c0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 18 12:34:18.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:34:18.627: INFO: rc: 1
Dec 18 12:34:18.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00113aff0 exit status 1   true [0xc00226a310 0xc00226a328 0xc00226a340] [0xc00226a310 0xc00226a328 0xc00226a340] [0xc00226a320 0xc00226a338] [0x935700 0x935700] 0xc001c6ed20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:34:28.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:34:28.776: INFO: rc: 1
Dec 18 12:34:28.776: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00113b140 exit status 1   true [0xc00226a348 0xc00226a360 0xc00226a378] [0xc00226a348 0xc00226a360 0xc00226a378] [0xc00226a358 0xc00226a370] [0x935700 0x935700] 0xc001c6f080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:34:38.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:34:39.287: INFO: rc: 1
Dec 18 12:34:39.288: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001360930 exit status 1   true [0xc0018d8150 0xc0018d8168 0xc0018d8180] [0xc0018d8150 0xc0018d8168 0xc0018d8180] [0xc0018d8160 0xc0018d8178] [0x935700 0x935700] 0xc0023d0a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:34:49.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:34:49.686: INFO: rc: 1
Dec 18 12:34:49.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001212090 exit status 1   true [0xc00226a380 0xc001302010 0xc001302028] [0xc00226a380 0xc001302010 0xc001302028] [0xc001302008 0xc001302020] [0x935700 0x935700] 0xc001c2cae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:34:59.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:34:59.929: INFO: rc: 1
Dec 18 12:34:59.930: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001221680 exit status 1   true [0xc00226a000 0xc00226a018 0xc00226a030] [0xc00226a000 0xc00226a018 0xc00226a030] [0xc00226a010 0xc00226a028] [0x935700 0x935700] 0xc00238c2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:35:09.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:35:10.056: INFO: rc: 1
Dec 18 12:35:10.056: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002470150 exit status 1   true [0xc000d32000 0xc000d32018 0xc000d32030] [0xc000d32000 0xc000d32018 0xc000d32030] [0xc000d32010 0xc000d32028] [0x935700 0x935700] 0xc00243dbc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:35:20.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:35:20.202: INFO: rc: 1
Dec 18 12:35:20.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002470510 exit status 1   true [0xc000d32038 0xc000d32050 0xc000d32068] [0xc000d32038 0xc000d32050 0xc000d32068] [0xc000d32048 0xc000d32060] [0x935700 0x935700] 0xc00243dec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:35:30.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:35:30.413: INFO: rc: 1
Dec 18 12:35:30.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015c62a0 exit status 1   true [0xc001302030 0xc001302048 0xc001302060] [0xc001302030 0xc001302048 0xc001302060] [0xc001302040 0xc001302058] [0x935700 0x935700] 0xc001baeea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:35:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:35:40.616: INFO: rc: 1
Dec 18 12:35:40.617: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a18150 exit status 1   true [0xc0018d8000 0xc0018d8018 0xc0018d8030] [0xc0018d8000 0xc0018d8018 0xc0018d8030] [0xc0018d8010 0xc0018d8028] [0x935700 0x935700] 0xc000db9c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:35:50.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:35:50.739: INFO: rc: 1
Dec 18 12:35:50.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001221800 exit status 1   true [0xc00226a038 0xc00226a050 0xc00226a068] [0xc00226a038 0xc00226a050 0xc00226a068] [0xc00226a048 0xc00226a060] [0x935700 0x935700] 0xc00238c900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:36:00.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:36:00.932: INFO: rc: 1
Dec 18 12:36:00.933: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a18270 exit status 1   true [0xc0018d8038 0xc0018d8050 0xc0018d8068] [0xc0018d8038 0xc0018d8050 0xc0018d8068] [0xc0018d8048 0xc0018d8060] [0x935700 0x935700] 0xc00267e2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:36:10.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:36:11.125: INFO: rc: 1
Dec 18 12:36:11.125: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a18390 exit status 1   true [0xc0018d8070 0xc0018d8088 0xc0018d80a0] [0xc0018d8070 0xc0018d8088 0xc0018d80a0] [0xc0018d8080 0xc0018d8098] [0x935700 0x935700] 0xc00267f020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:36:21.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:36:21.293: INFO: rc: 1
Dec 18 12:36:21.293: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a184b0 exit status 1   true [0xc0018d80a8 0xc0018d80c0 0xc0018d80d8] [0xc0018d80a8 0xc0018d80c0 0xc0018d80d8] [0xc0018d80b8 0xc0018d80d0] [0x935700 0x935700] 0xc00267f320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:36:31.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:36:31.475: INFO: rc: 1
Dec 18 12:36:31.476: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a185d0 exit status 1   true [0xc0018d80e0 0xc0018d80f8 0xc0018d8110] [0xc0018d80e0 0xc0018d80f8 0xc0018d8110] [0xc0018d80f0 0xc0018d8108] [0x935700 0x935700] 0xc00267fb00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:36:41.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:36:41.635: INFO: rc: 1
Dec 18 12:36:41.635: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015c63f0 exit status 1   true [0xc001302068 0xc001302080 0xc001302098] [0xc001302068 0xc001302080 0xc001302098] [0xc001302078 0xc001302090] [0x935700 0x935700] 0xc001baf5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:36:51.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:36:51.829: INFO: rc: 1
Dec 18 12:36:51.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a186f0 exit status 1   true [0xc0013020a0 0xc0013020b8 0xc0018d8128] [0xc0013020a0 0xc0013020b8 0xc0018d8128] [0xc0013020b0 0xc0018d8120] [0x935700 0x935700] 0xc00267fce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:37:01.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:37:02.043: INFO: rc: 1
Dec 18 12:37:02.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002470180 exit status 1   true [0xc000d32000 0xc000d32018 0xc000d32030] [0xc000d32000 0xc000d32018 0xc000d32030] [0xc000d32010 0xc000d32028] [0x935700 0x935700] 0xc00267e6c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:37:12.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:37:12.208: INFO: rc: 1
Dec 18 12:37:12.209: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0012216b0 exit status 1   true [0xc0018d8000 0xc0018d8018 0xc0018d8030] [0xc0018d8000 0xc0018d8018 0xc0018d8030] [0xc0018d8010 0xc0018d8028] [0x935700 0x935700] 0xc000db9c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:37:22.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:37:22.343: INFO: rc: 1
Dec 18 12:37:22.343: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015c62d0 exit status 1   true [0xc00226a000 0xc00226a018 0xc00226a030] [0xc00226a000 0xc00226a018 0xc00226a030] [0xc00226a010 0xc00226a028] [0x935700 0x935700] 0xc001baeea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:37:32.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:37:32.538: INFO: rc: 1
Dec 18 12:37:32.538: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a18120 exit status 1   true [0xc001302000 0xc001302018 0xc001302030] [0xc001302000 0xc001302018 0xc001302030] [0xc001302010 0xc001302028] [0x935700 0x935700] 0xc00243dbc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:37:42.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:37:42.696: INFO: rc: 1
Dec 18 12:37:42.696: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0024705a0 exit status 1   true [0xc000d32038 0xc000d32050 0xc000d32068] [0xc000d32038 0xc000d32050 0xc000d32068] [0xc000d32048 0xc000d32060] [0x935700 0x935700] 0xc00267f140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:37:52.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:37:52.836: INFO: rc: 1
Dec 18 12:37:52.836: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a182d0 exit status 1   true [0xc001302038 0xc001302050 0xc001302068] [0xc001302038 0xc001302050 0xc001302068] [0xc001302048 0xc001302060] [0x935700 0x935700] 0xc00243dec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:38:02.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:38:02.965: INFO: rc: 1
Dec 18 12:38:02.965: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a18420 exit status 1   true [0xc001302070 0xc001302088 0xc0013020c0] [0xc001302070 0xc001302088 0xc0013020c0] [0xc001302080 0xc001302098] [0x935700 0x935700] 0xc001db42a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:38:12.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:38:13.140: INFO: rc: 1
Dec 18 12:38:13.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015c6480 exit status 1   true [0xc00226a038 0xc00226a050 0xc00226a068] [0xc00226a038 0xc00226a050 0xc00226a068] [0xc00226a048 0xc00226a060] [0x935700 0x935700] 0xc001baf5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:38:23.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:38:23.303: INFO: rc: 1
Dec 18 12:38:23.303: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0015c65a0 exit status 1   true [0xc00226a070 0xc00226a088 0xc00226a0a0] [0xc00226a070 0xc00226a088 0xc00226a0a0] [0xc00226a080 0xc00226a098] [0x935700 0x935700] 0xc00238c2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:38:33.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:38:33.489: INFO: rc: 1
Dec 18 12:38:33.489: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a185a0 exit status 1   true [0xc0013020c8 0xc0013020e0 0xc0013020f8] [0xc0013020c8 0xc0013020e0 0xc0013020f8] [0xc0013020d8 0xc0013020f0] [0x935700 0x935700] 0xc001db45a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:38:43.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:38:43.708: INFO: rc: 1
Dec 18 12:38:43.709: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001a187e0 exit status 1   true [0xc001302100 0xc001302118 0xc001302130] [0xc001302100 0xc001302118 0xc001302130] [0xc001302110 0xc001302128] [0x935700 0x935700] 0xc001db48a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:38:53.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:38:53.929: INFO: rc: 1
Dec 18 12:38:53.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002470150 exit status 1   true [0xc000d32000 0xc000d32018 0xc000d32030] [0xc000d32000 0xc000d32018 0xc000d32030] [0xc000d32010 0xc000d32028] [0x935700 0x935700] 0xc00243dbc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Dec 18 12:39:03.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jwt2f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:39:04.071: INFO: rc: 1
Dec 18 12:39:04.071: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Dec 18 12:39:04.072: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 18 12:39:04.110: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jwt2f
Dec 18 12:39:04.126: INFO: Scaling statefulset ss to 0
Dec 18 12:39:04.280: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 12:39:04.288: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:39:04.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-jwt2f" for this suite.
Dec 18 12:39:12.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:39:12.768: INFO: namespace: e2e-tests-statefulset-jwt2f, resource: bindings, ignored listing per whitelist
Dec 18 12:39:12.792: INFO: namespace e2e-tests-statefulset-jwt2f deletion completed in 8.451190264s

• [SLOW TEST:391.080 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
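The block above shows the framework's RunHostCmd helper retrying a `kubectl exec` every 10 seconds after `ss-2` was deleted by the scale-down, until the test moves on. A minimal sketch of that retry pattern, with a stub function standing in for the real `kubectl exec` (the stub, the attempt counter, and the function names here are illustrative, not part of the e2e framework):

```shell
#!/bin/sh
# Sketch of the RunHostCmd retry loop: run a command, and on failure
# wait a fixed interval and try again, up to a bounded number of tries.
# `do_exec` is a stub in place of `kubectl exec`; it fails twice and
# then succeeds, simulating a pod that is briefly unavailable.
ATTEMPTS=0
do_exec() {
  ATTEMPTS=$((ATTEMPTS + 1))
  # Fail (like "pods not found") until the third attempt.
  [ "$ATTEMPTS" -ge 3 ]
}

retry_host_cmd() {
  max=$1; interval=$2
  i=0
  while [ "$i" -lt "$max" ]; do
    if do_exec; then
      echo "succeeded after $ATTEMPTS attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"   # the real loop waits 10s between retries
  done
  echo "giving up after $max attempts" >&2
  return 1
}

retry_host_cmd 5 0
```

Note the trailing `|| true` on the exec'd command in the log: the in-pod `mv` is allowed to fail, so the retries here are driven by the server-side "pods not found" error, not by the move itself.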
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:39:12.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 18 12:39:13.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:13.500: INFO: stderr: ""
Dec 18 12:39:13.500: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 12:39:13.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:13.711: INFO: stderr: ""
Dec 18 12:39:13.711: INFO: stdout: "update-demo-nautilus-6cljv update-demo-nautilus-s2fq7 "
Dec 18 12:39:13.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:13.886: INFO: stderr: ""
Dec 18 12:39:13.886: INFO: stdout: ""
Dec 18 12:39:13.886: INFO: update-demo-nautilus-6cljv is created but not running
Dec 18 12:39:18.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:19.067: INFO: stderr: ""
Dec 18 12:39:19.067: INFO: stdout: "update-demo-nautilus-6cljv update-demo-nautilus-s2fq7 "
Dec 18 12:39:19.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:19.178: INFO: stderr: ""
Dec 18 12:39:19.178: INFO: stdout: ""
Dec 18 12:39:19.178: INFO: update-demo-nautilus-6cljv is created but not running
Dec 18 12:39:24.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:24.312: INFO: stderr: ""
Dec 18 12:39:24.313: INFO: stdout: "update-demo-nautilus-6cljv update-demo-nautilus-s2fq7 "
Dec 18 12:39:24.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:24.491: INFO: stderr: ""
Dec 18 12:39:24.491: INFO: stdout: ""
Dec 18 12:39:24.491: INFO: update-demo-nautilus-6cljv is created but not running
Dec 18 12:39:29.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:29.744: INFO: stderr: ""
Dec 18 12:39:29.744: INFO: stdout: "update-demo-nautilus-6cljv update-demo-nautilus-s2fq7 "
Dec 18 12:39:29.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cljv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:29.901: INFO: stderr: ""
Dec 18 12:39:29.901: INFO: stdout: "true"
Dec 18 12:39:29.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cljv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:30.050: INFO: stderr: ""
Dec 18 12:39:30.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:39:30.051: INFO: validating pod update-demo-nautilus-6cljv
Dec 18 12:39:30.092: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:39:30.092: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 12:39:30.092: INFO: update-demo-nautilus-6cljv is verified up and running
Dec 18 12:39:30.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2fq7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:30.239: INFO: stderr: ""
Dec 18 12:39:30.239: INFO: stdout: "true"
Dec 18 12:39:30.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s2fq7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:30.397: INFO: stderr: ""
Dec 18 12:39:30.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:39:30.397: INFO: validating pod update-demo-nautilus-s2fq7
Dec 18 12:39:30.411: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:39:30.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 12:39:30.411: INFO: update-demo-nautilus-s2fq7 is verified up and running
STEP: using delete to clean up resources
Dec 18 12:39:30.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:30.565: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:39:30.565: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 18 12:39:30.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ftjn8'
Dec 18 12:39:30.762: INFO: stderr: "No resources found.\n"
Dec 18 12:39:30.762: INFO: stdout: ""
Dec 18 12:39:30.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ftjn8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 12:39:30.970: INFO: stderr: ""
Dec 18 12:39:30.971: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:39:30.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ftjn8" for this suite.
Dec 18 12:39:55.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:39:55.183: INFO: namespace: e2e-tests-kubectl-ftjn8, resource: bindings, ignored listing per whitelist
Dec 18 12:39:55.235: INFO: namespace e2e-tests-kubectl-ftjn8 deletion completed in 24.234192801s

• [SLOW TEST:42.443 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
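The "created but not running" lines above come from a polling loop: every 5 seconds the test lists the pods matching `-l name=update-demo`, then runs a go-template against each pod that prints `true` only when the `update-demo` container reports a running state. A local sketch of that loop, with a stub in place of the `kubectl get pods -o template` call (pod names, the `pod_running` stub, and the poll counter are all illustrative):

```shell
#!/bin/sh
# Sketch of the "waiting for all containers to come up" loop.
# `pod_running` stands in for the go-template query; it echoes "true"
# only when the (simulated) container is running.
POLLS=0
pod_running() {
  case $1 in
    pod-a) echo true ;;                        # running from the start
    pod-b) [ "$POLLS" -ge 2 ] && echo true ;;  # comes up on the 2nd poll
  esac
}

wait_for_pods() {
  while :; do
    POLLS=$((POLLS + 1))
    all=true
    for pod in "$@"; do
      if [ "$(pod_running "$pod")" != "true" ]; then
        echo "$pod is created but not running"
        all=false
      fi
    done
    [ "$all" = true ] && break
    sleep 0   # the real loop waits 5s between polls
  done
  echo "all pods running after $POLLS poll(s)"
}

wait_for_pods pod-a pod-b
```

The empty-string stdout in the log (`stdout: ""`) is exactly this template printing nothing for a pod whose container is not yet running.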
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:39:55.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 12:39:55.453: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:39:56.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-jw5d5" for this suite.
Dec 18 12:40:02.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:40:02.710: INFO: namespace: e2e-tests-custom-resource-definition-jw5d5, resource: bindings, ignored listing per whitelist
Dec 18 12:40:02.744: INFO: namespace e2e-tests-custom-resource-definition-jw5d5 deletion completed in 6.144422086s

• [SLOW TEST:7.508 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
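The CRD test above logs almost nothing because the whole spec is a create followed by a delete through the apiextensions API. A hypothetical minimal definition of the kind it creates, using the `apiextensions.k8s.io/v1beta1` schema current for the v1.13 server under test (the group and names below are illustrative, not taken from the log):

```yaml
# Illustrative CRD of the shape this test creates and then deletes.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
```

The conformance check passes once the definition is accepted, established, and cleanly removed again.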
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:40:02.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 18 12:40:13.590: INFO: Successfully updated pod "annotationupdate82edf2d5-2193-11ea-ad77-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:40:15.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l7wwq" for this suite.
Dec 18 12:40:39.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:40:39.970: INFO: namespace: e2e-tests-projected-l7wwq, resource: bindings, ignored listing per whitelist
Dec 18 12:40:40.078: INFO: namespace e2e-tests-projected-l7wwq deletion completed in 24.331902565s

• [SLOW TEST:37.334 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
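The "Successfully updated pod" line above is the test patching the pod's annotations and then waiting for the kubelet to rewrite the projected downward API file inside the running container. A hypothetical pod spec of the shape this test exercises (names, image, and annotation values are illustrative):

```yaml
# Illustrative pod with a projected downwardAPI volume exposing
# metadata.annotations; the kubelet refreshes the mounted file when
# the annotations change, which is what the test observes.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    builder: alice
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

Because the refresh rides on the kubelet's periodic volume sync, the update is eventually consistent, which is why the test waits rather than asserting immediately.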
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:40:40.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-99336e2c-2193-11ea-ad77-0242ac110004
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-99336e2c-2193-11ea-ad77-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:42:06.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nllds" for this suite.
Dec 18 12:42:28.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:42:29.025: INFO: namespace: e2e-tests-configmap-nllds, resource: bindings, ignored listing per whitelist
Dec 18 12:42:29.050: INFO: namespace e2e-tests-configmap-nllds deletion completed in 22.217056736s

• [SLOW TEST:108.972 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
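The long "waiting to observe update in volume" step above reflects the same eventual-consistency behavior: after the ConfigMap is updated, the test polls the mounted file until the new value appears, since the kubelet syncs volume contents on its own cadence. A local sketch of that wait, with a plain file standing in for the mounted volume and a background writer standing in for the kubelet sync (the file layout and values are illustrative):

```shell
#!/bin/sh
# Sketch of "update the ConfigMap, then poll the mounted file until the
# change shows up". A temp file plays the volume; a background subshell
# plays the kubelet rewriting it after a delay.
dir=$(mktemp -d)
echo "value-1" > "$dir/data"

# Simulated kubelet sync: apply the "update" shortly after it is made.
( sleep 0.2 2>/dev/null || sleep 1; echo "value-2" > "$dir/data" ) &

# Poll until the updated value is observed, with a bounded wait,
# like the test's "waiting to observe update in volume" step.
i=0
while [ "$(cat "$dir/data")" != "value-2" ] && [ "$i" -lt 50 ]; do
  i=$((i + 1))
  sleep 0.1 2>/dev/null || sleep 1
done
observed=$(cat "$dir/data")
echo "observed: $observed"
wait
rm -rf "$dir"
```

In the real test the delay can run to a minute or more (the 108-second total above is mostly this wait), which is why ConfigMap volume updates should never be treated as synchronous with the API write.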
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:42:29.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 18 12:42:29.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:31.585: INFO: stderr: ""
Dec 18 12:42:31.585: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 12:42:31.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:31.764: INFO: stderr: ""
Dec 18 12:42:31.764: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
Dec 18 12:42:31.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zv7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:31.951: INFO: stderr: ""
Dec 18 12:42:31.951: INFO: stdout: ""
Dec 18 12:42:31.951: INFO: update-demo-nautilus-8zv7l is created but not running
Dec 18 12:42:36.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:37.232: INFO: stderr: ""
Dec 18 12:42:37.232: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
Dec 18 12:42:37.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zv7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:37.435: INFO: stderr: ""
Dec 18 12:42:37.435: INFO: stdout: ""
Dec 18 12:42:37.435: INFO: update-demo-nautilus-8zv7l is created but not running
Dec 18 12:42:42.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:42.597: INFO: stderr: ""
Dec 18 12:42:42.598: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
Dec 18 12:42:42.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zv7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:42.796: INFO: stderr: ""
Dec 18 12:42:42.796: INFO: stdout: ""
Dec 18 12:42:42.796: INFO: update-demo-nautilus-8zv7l is created but not running
Dec 18 12:42:47.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:48.046: INFO: stderr: ""
Dec 18 12:42:48.046: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
Dec 18 12:42:48.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zv7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:48.193: INFO: stderr: ""
Dec 18 12:42:48.193: INFO: stdout: "true"
Dec 18 12:42:48.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zv7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:48.368: INFO: stderr: ""
Dec 18 12:42:48.368: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:42:48.368: INFO: validating pod update-demo-nautilus-8zv7l
Dec 18 12:42:48.380: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:42:48.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 12:42:48.380: INFO: update-demo-nautilus-8zv7l is verified up and running
Dec 18 12:42:48.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvdfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:48.709: INFO: stderr: ""
Dec 18 12:42:48.709: INFO: stdout: "true"
Dec 18 12:42:48.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvdfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:48.883: INFO: stderr: ""
Dec 18 12:42:48.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:42:48.884: INFO: validating pod update-demo-nautilus-wvdfn
Dec 18 12:42:48.915: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:42:48.916: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 12:42:48.916: INFO: update-demo-nautilus-wvdfn is verified up and running
STEP: scaling down the replication controller
Dec 18 12:42:48.921: INFO: scanned /root for discovery docs: 
Dec 18 12:42:48.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:50.160: INFO: stderr: ""
Dec 18 12:42:50.160: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 12:42:50.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:50.421: INFO: stderr: ""
Dec 18 12:42:50.421: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 12:42:55.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:42:55.606: INFO: stderr: ""
Dec 18 12:42:55.606: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 12:43:00.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:00.792: INFO: stderr: ""
Dec 18 12:43:00.792: INFO: stdout: "update-demo-nautilus-8zv7l update-demo-nautilus-wvdfn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 12:43:05.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:05.976: INFO: stderr: ""
Dec 18 12:43:05.976: INFO: stdout: "update-demo-nautilus-wvdfn "
Dec 18 12:43:05.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvdfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:06.126: INFO: stderr: ""
Dec 18 12:43:06.126: INFO: stdout: "true"
Dec 18 12:43:06.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvdfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:06.222: INFO: stderr: ""
Dec 18 12:43:06.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:43:06.222: INFO: validating pod update-demo-nautilus-wvdfn
Dec 18 12:43:06.230: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:43:06.230: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 12:43:06.230: INFO: update-demo-nautilus-wvdfn is verified up and running
STEP: scaling up the replication controller
Dec 18 12:43:06.232: INFO: scanned /root for discovery docs: 
Dec 18 12:43:06.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:07.562: INFO: stderr: ""
Dec 18 12:43:07.562: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 12:43:07.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:07.794: INFO: stderr: ""
Dec 18 12:43:07.794: INFO: stdout: "update-demo-nautilus-sthms update-demo-nautilus-wvdfn "
Dec 18 12:43:07.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sthms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:08.337: INFO: stderr: ""
Dec 18 12:43:08.337: INFO: stdout: ""
Dec 18 12:43:08.337: INFO: update-demo-nautilus-sthms is created but not running
Dec 18 12:43:13.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:13.587: INFO: stderr: ""
Dec 18 12:43:13.587: INFO: stdout: "update-demo-nautilus-sthms update-demo-nautilus-wvdfn "
Dec 18 12:43:13.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sthms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:14.026: INFO: stderr: ""
Dec 18 12:43:14.026: INFO: stdout: ""
Dec 18 12:43:14.026: INFO: update-demo-nautilus-sthms is created but not running
Dec 18 12:43:19.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:19.231: INFO: stderr: ""
Dec 18 12:43:19.231: INFO: stdout: "update-demo-nautilus-sthms update-demo-nautilus-wvdfn "
Dec 18 12:43:19.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sthms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:19.354: INFO: stderr: ""
Dec 18 12:43:19.354: INFO: stdout: "true"
Dec 18 12:43:19.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sthms -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:19.509: INFO: stderr: ""
Dec 18 12:43:19.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:43:19.509: INFO: validating pod update-demo-nautilus-sthms
Dec 18 12:43:19.521: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:43:19.521: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 18 12:43:19.521: INFO: update-demo-nautilus-sthms is verified up and running
Dec 18 12:43:19.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvdfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:19.612: INFO: stderr: ""
Dec 18 12:43:19.612: INFO: stdout: "true"
Dec 18 12:43:19.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvdfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:19.726: INFO: stderr: ""
Dec 18 12:43:19.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 12:43:19.726: INFO: validating pod update-demo-nautilus-wvdfn
Dec 18 12:43:19.741: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 12:43:19.741: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 18 12:43:19.741: INFO: update-demo-nautilus-wvdfn is verified up and running
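Aside: the list query above emits pod names space-separated with a trailing blank (one `{{.metadata.name}} ` per item), and the suite then validates each name in turn. A minimal local sketch of that iteration, using the exact stdout string from this log (`would validate` is an illustrative stand-in for the two per-pod template queries):

```shell
#!/bin/sh
# The go-template list query prints names space-separated with a trailing
# blank; unquoted word splitting iterates over them cleanly.
stdout="update-demo-nautilus-sthms update-demo-nautilus-wvdfn "

checked=0
for pod in $stdout; do          # intentional word splitting
  checked=$((checked + 1))
  echo "would validate $pod"    # the real test runs state/image queries here
done
```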
STEP: using delete to clean up resources
Dec 18 12:43:19.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:19.932: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:43:19.932: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 18 12:43:19.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-nmmtg'
Dec 18 12:43:20.403: INFO: stderr: "No resources found.\n"
Dec 18 12:43:20.403: INFO: stdout: ""
Dec 18 12:43:20.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-nmmtg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 12:43:20.579: INFO: stderr: ""
Dec 18 12:43:20.579: INFO: stdout: ""
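Aside: the cleanup above follows a force-delete-then-poll pattern: delete with `--grace-period=0 --force`, then repeatedly list by label selector until the output is empty. A minimal local sketch of that poll-until-empty loop (`list_pods` is a stand-in for the real `kubectl get pods -l name=update-demo ...` call; here it pretends pods disappear after three polls):

```shell
#!/bin/sh
# Poll until a listing command returns no output, with a bounded number
# of attempts -- the same shape as the e2e framework's post-delete check.
count_file=$(mktemp)
echo 3 > "$count_file"          # pretend three pods are still terminating

list_pods() {
  n=$(cat "$count_file")
  if [ "$n" -gt 0 ]; then
    echo $((n - 1)) > "$count_file"
    echo "update-demo-nautilus-$n"
  fi
}

attempts=0
while [ "$attempts" -lt 10 ]; do
  out=$(list_pods)              # subshell is fine: state lives in the file
  if [ -z "$out" ]; then
    echo "all pods gone after $attempts polls"
    break
  fi
  attempts=$((attempts + 1))
done
rm -f "$count_file"
```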
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:43:20.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nmmtg" for this suite.
Dec 18 12:43:44.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:43:44.931: INFO: namespace: e2e-tests-kubectl-nmmtg, resource: bindings, ignored listing per whitelist
Dec 18 12:43:44.994: INFO: namespace e2e-tests-kubectl-nmmtg deletion completed in 24.390211223s

• [SLOW TEST:75.943 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:43:44.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mc84x
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-mc84x
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-mc84x
Dec 18 12:43:45.372: INFO: Found 0 stateful pods, waiting for 1
Dec 18 12:43:55.397: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
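Aside: "burst" scaling in this suite exercises a StatefulSet whose pods are created and deleted in parallel rather than strictly in ordinal order; in the API that behavior is selected with `podManagementPolicy: Parallel`. A minimal sketch of such a spec (names, service, and container mirror this log; the exact manifest the framework builds may differ, and the labels shown are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test               # headless service created by the suite
  replicas: 1
  podManagementPolicy: Parallel   # burst: don't serialize on ordinal order
  selector:
    matchLabels:
      app: ss                     # illustrative label
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:           # the test breaks this by moving index.html
          httpGet:
            path: /index.html
            port: 80
```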
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 18 12:43:55.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:43:55.980: INFO: stderr: ""
Dec 18 12:43:55.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:43:55.980: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:43:56.012: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 18 12:44:06.039: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
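Aside: the harness makes ss-0 unready by moving nginx's index.html out of the web root, so the HTTP readiness probe starts failing; the trailing `|| true` keeps the exec step from aborting when the file is already gone (e.g. on a retry). The idiom in isolation, with temp directories standing in for the real paths:

```shell
#!/bin/sh
# The `mv ... || true` idiom: attempt the move, but treat failure as
# success so a retried or already-applied step does not abort the caller.
webroot=$(mktemp -d)   # stands in for /usr/share/nginx/html
stash=$(mktemp -d)     # stands in for /tmp
echo "hello" > "$webroot/index.html"

# First run actually moves the file (the readiness probe would now fail).
mv -v "$webroot/index.html" "$stash/" || true
first_rc=$?

# Second run finds nothing to move, but the guard still yields rc 0.
mv -v "$webroot/index.html" "$stash/" 2>/dev/null || true
second_rc=$?
```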
Dec 18 12:44:06.039: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 12:44:06.073: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:06.073: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:06.073: INFO: 
Dec 18 12:44:06.074: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 18 12:44:07.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982075199s
Dec 18 12:44:08.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969558046s
Dec 18 12:44:09.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.81609446s
Dec 18 12:44:10.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.630913323s
Dec 18 12:44:11.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.529806567s
Dec 18 12:44:12.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.486637215s
Dec 18 12:44:14.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.441018935s
Dec 18 12:44:16.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 824.582257ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-mc84x
Dec 18 12:44:17.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:44:18.136: INFO: stderr: ""
Dec 18 12:44:18.136: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 12:44:18.136: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 12:44:18.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:44:18.407: INFO: rc: 1
Dec 18 12:44:18.408: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001212270 exit status 1   true [0xc0019d2430 0xc0019d2448 0xc0019d2460] [0xc0019d2430 0xc0019d2448 0xc0019d2460] [0xc0019d2440 0xc0019d2458] [0x935700 0x935700] 0xc002437a40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 18 12:44:28.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:44:29.164: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 18 12:44:29.165: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 12:44:29.165: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 12:44:29.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:44:29.736: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 18 12:44:29.736: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 12:44:29.736: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 12:44:29.784: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 12:44:29.784: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 12:44:29.784: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 18 12:44:29.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:44:30.357: INFO: stderr: ""
Dec 18 12:44:30.357: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:44:30.357: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:44:30.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:44:31.118: INFO: stderr: ""
Dec 18 12:44:31.119: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:44:31.119: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:44:31.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 12:44:31.856: INFO: stderr: ""
Dec 18 12:44:31.856: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 12:44:31.856: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 12:44:31.856: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 12:44:31.895: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 18 12:44:41.924: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:44:41.924: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:44:41.924: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 12:44:41.969: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:41.969: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:41.969: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:41.969: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:41.969: INFO: 
Dec 18 12:44:41.969: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:43.020: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:43.021: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:43.021: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:43.021: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:43.021: INFO: 
Dec 18 12:44:43.021: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:44.436: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:44.436: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:44.436: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:44.436: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:44.436: INFO: 
Dec 18 12:44:44.436: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:45.461: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:45.461: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:45.461: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:45.461: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:45.461: INFO: 
Dec 18 12:44:45.461: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:46.501: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:46.501: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:46.502: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:46.502: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:46.502: INFO: 
Dec 18 12:44:46.502: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:47.523: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:47.523: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:47.523: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:47.523: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:47.524: INFO: 
Dec 18 12:44:47.524: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:48.559: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:48.560: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:48.560: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:48.560: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:48.560: INFO: 
Dec 18 12:44:48.560: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:49.792: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:49.793: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:49.793: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:49.793: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:49.793: INFO: 
Dec 18 12:44:49.793: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:50.833: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:50.833: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:50.833: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:50.833: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:50.833: INFO: 
Dec 18 12:44:50.833: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 12:44:51.879: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 18 12:44:51.880: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:43:45 +0000 UTC  }]
Dec 18 12:44:51.880: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:51.880: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 12:44:06 +0000 UTC  }]
Dec 18 12:44:51.880: INFO: 
Dec 18 12:44:51.880: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-mc84x
Dec 18 12:44:52.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:44:53.220: INFO: rc: 1
Dec 18 12:44:53.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002543ec0 exit status 1   true [0xc0003ed338 0xc0003ed350 0xc0003ed370] [0xc0003ed338 0xc0003ed350 0xc0003ed370] [0xc0003ed348 0xc0003ed368] [0x935700 0x935700] 0xc001458300 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 18 12:45:03.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:45:03.395: INFO: rc: 1
Dec 18 12:45:03.395: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001760120 exit status 1   true [0xc001302000 0xc001302018 0xc001302030] [0xc001302000 0xc001302018 0xc001302030] [0xc001302010 0xc001302028] [0x935700 0x935700] 0xc0028f48a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

(The RunHostCmd retry above repeated every 10s from 12:45:13 through 12:49:48, each attempt returning rc: 1 with the same stderr: Error from server (NotFound): pods "ss-0" not found.)
Dec 18 12:49:58.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-mc84x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 12:49:58.526: INFO: rc: 1
Dec 18 12:49:58.527: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 18 12:49:58.527: INFO: Scaling statefulset ss to 0
Dec 18 12:49:58.587: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 18 12:49:58.596: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mc84x
Dec 18 12:49:58.605: INFO: Scaling statefulset ss to 0
Dec 18 12:49:58.658: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 12:49:58.664: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:49:58.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mc84x" for this suite.
Dec 18 12:50:06.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:50:06.890: INFO: namespace: e2e-tests-statefulset-mc84x, resource: bindings, ignored listing per whitelist
Dec 18 12:50:06.921: INFO: namespace e2e-tests-statefulset-mc84x deletion completed in 8.161864744s

• [SLOW TEST:381.927 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:50:06.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 18 12:50:07.285: INFO: Waiting up to 5m0s for pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004" in namespace "e2e-tests-var-expansion-6g95m" to be "success or failure"
Dec 18 12:50:07.313: INFO: Pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 27.332775ms
Dec 18 12:50:09.326: INFO: Pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040497664s
Dec 18 12:50:11.342: INFO: Pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056387834s
Dec 18 12:50:13.777: INFO: Pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492010997s
Dec 18 12:50:15.799: INFO: Pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.513810446s
STEP: Saw pod success
Dec 18 12:50:15.799: INFO: Pod "var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:50:15.806: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 12:50:15.981: INFO: Waiting for pod var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004 to disappear
Dec 18 12:50:16.088: INFO: Pod var-expansion-eb2c04d3-2194-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:50:16.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-6g95m" for this suite.
Dec 18 12:50:22.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:50:22.280: INFO: namespace: e2e-tests-var-expansion-6g95m, resource: bindings, ignored listing per whitelist
Dec 18 12:50:22.347: INFO: namespace e2e-tests-var-expansion-6g95m deletion completed in 6.235965016s

• [SLOW TEST:15.426 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:50:22.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:50:22.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-76jjn" to be "success or failure"
Dec 18 12:50:22.655: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.174126ms
Dec 18 12:50:24.661: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020716438s
Dec 18 12:50:26.676: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035422846s
Dec 18 12:50:28.687: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046790858s
Dec 18 12:50:31.119: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.478410175s
Dec 18 12:50:33.128: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.487468683s
STEP: Saw pod success
Dec 18 12:50:33.128: INFO: Pod "downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:50:33.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 12:50:33.439: INFO: Waiting for pod downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004 to disappear
Dec 18 12:50:33.739: INFO: Pod downwardapi-volume-f4550230-2194-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:50:33.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-76jjn" for this suite.
Dec 18 12:50:39.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:50:39.933: INFO: namespace: e2e-tests-downward-api-76jjn, resource: bindings, ignored listing per whitelist
Dec 18 12:50:40.065: INFO: namespace e2e-tests-downward-api-76jjn deletion completed in 6.305276869s

• [SLOW TEST:17.717 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
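For context, the downward API volume plugin exercised in the test above is driven by a pod spec along these lines. This is a hypothetical sketch (pod name, image, and paths are illustrative), not the manifest the test framework actually generated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

When the container declares no CPU limit, `limits.cpu` resolves to the node's allocatable CPU, which is the behavior this conformance case verifies.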
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:50:40.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-fed285ca-2194-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 12:50:40.248: INFO: Waiting up to 5m0s for pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004" in namespace "e2e-tests-configmap-z6bjr" to be "success or failure"
Dec 18 12:50:40.259: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.003944ms
Dec 18 12:50:42.373: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125465388s
Dec 18 12:50:44.386: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138575356s
Dec 18 12:50:46.754: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50610675s
Dec 18 12:50:48.767: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518752559s
Dec 18 12:50:50.938: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.690626399s
STEP: Saw pod success
Dec 18 12:50:50.939: INFO: Pod "pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:50:50.950: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Dec 18 12:50:51.110: INFO: Waiting for pod pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004 to disappear
Dec 18 12:50:51.121: INFO: Pod pod-configmaps-fed33a03-2194-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:50:51.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z6bjr" for this suite.
Dec 18 12:50:59.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:50:59.336: INFO: namespace: e2e-tests-configmap-z6bjr, resource: bindings, ignored listing per whitelist
Dec 18 12:50:59.341: INFO: namespace e2e-tests-configmap-z6bjr deletion completed in 8.206196691s

• [SLOW TEST:19.276 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
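A pod consuming a ConfigMap volume with `defaultMode` set, as exercised above, is typically shaped like the following sketch (names and image are illustrative, not the test's generated manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume   # illustrative; must exist in the namespace
      defaultMode: 0400             # file mode applied to each projected key
```

The test asserts that the files projected from the ConfigMap carry the requested mode bits and the expected content.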
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:50:59.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:51:09.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-n2ndc" for this suite.
Dec 18 12:51:51.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:51:51.942: INFO: namespace: e2e-tests-kubelet-test-n2ndc, resource: bindings, ignored listing per whitelist
Dec 18 12:51:52.060: INFO: namespace e2e-tests-kubelet-test-n2ndc deletion completed in 42.258189441s

• [SLOW TEST:52.719 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
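The Kubelet case above schedules a busybox pod that writes to stdout; a minimal sketch of such a pod (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-container
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World'"]
```

The suite then reads the container's stdout back through the kubelet (the equivalent of `kubectl logs`) and checks for the expected string.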
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:51:52.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 18 12:51:52.452: INFO: Waiting up to 5m0s for pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004" in namespace "e2e-tests-var-expansion-8rtfl" to be "success or failure"
Dec 18 12:51:52.489: INFO: Pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 36.588819ms
Dec 18 12:51:54.606: INFO: Pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153908116s
Dec 18 12:51:56.617: INFO: Pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165078862s
Dec 18 12:51:58.914: INFO: Pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.461732685s
Dec 18 12:52:00.949: INFO: Pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.496237491s
STEP: Saw pod success
Dec 18 12:52:00.949: INFO: Pod "var-expansion-29d60938-2195-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:52:00.967: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-29d60938-2195-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 12:52:01.406: INFO: Waiting for pod var-expansion-29d60938-2195-11ea-ad77-0242ac110004 to disappear
Dec 18 12:52:01.428: INFO: Pod var-expansion-29d60938-2195-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:52:01.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-8rtfl" for this suite.
Dec 18 12:52:07.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:52:07.638: INFO: namespace: e2e-tests-var-expansion-8rtfl, resource: bindings, ignored listing per whitelist
Dec 18 12:52:07.765: INFO: namespace e2e-tests-var-expansion-8rtfl deletion completed in 6.284192238s

• [SLOW TEST:15.704 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
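The substitution tested above relies on Kubernetes expanding `$(VAR)` references in a container's `args` before the process starts. A minimal sketch (names and the message value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["/bin/sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by Kubernetes, not by the shell
```

The test verifies that the expanded value appears in the container's output.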
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:52:07.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1218 12:52:53.251761       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 12:52:53.252: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:52:53.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vb2bz" for this suite.
Dec 18 12:53:03.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:53:03.935: INFO: namespace: e2e-tests-gc-vb2bz, resource: bindings, ignored listing per whitelist
Dec 18 12:53:04.093: INFO: namespace e2e-tests-gc-vb2bz deletion completed in 10.828253443s

• [SLOW TEST:56.328 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
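The "delete options say so" step above corresponds to deleting the replication controller with an orphaning `DeleteOptions` payload; a sketch of that request body:

```yaml
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # leave dependents (the RC's pods) running after the owner is deleted
```

With `Orphan`, the garbage collector removes the owner references from the dependents instead of deleting them, which is why the test waits 30 seconds to confirm the pods survive.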
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:53:04.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5505d1f3-2195-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 12:53:05.041: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-cxgv2" to be "success or failure"
Dec 18 12:53:05.072: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.41098ms
Dec 18 12:53:09.132: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090742089s
Dec 18 12:53:11.150: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108859284s
Dec 18 12:53:13.205: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163508692s
Dec 18 12:53:15.217: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.175714019s
Dec 18 12:53:17.234: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.192958338s
Dec 18 12:53:19.356: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.314667474s
Dec 18 12:53:21.385: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.343667286s
Dec 18 12:53:23.406: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.364876376s
STEP: Saw pod success
Dec 18 12:53:23.406: INFO: Pod "pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:53:23.413: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 12:53:23.664: INFO: Waiting for pod pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004 to disappear
Dec 18 12:53:23.689: INFO: Pod pod-projected-configmaps-551e6be3-2195-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:53:23.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cxgv2" for this suite.
Dec 18 12:53:29.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:53:30.029: INFO: namespace: e2e-tests-projected-cxgv2, resource: bindings, ignored listing per whitelist
Dec 18 12:53:30.043: INFO: namespace e2e-tests-projected-cxgv2 deletion completed in 6.34149013s

• [SLOW TEST:25.948 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
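Consuming one ConfigMap through two projected volumes in the same pod, as above, looks roughly like this sketch (names, image, and file paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-1
      mountPath: /etc/projected-1
    - name: projected-2
      mountPath: /etc/projected-2
  volumes:
  - name: projected-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # illustrative; same ConfigMap in both volumes
  - name: projected-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```

The test checks that both mount points expose the same ConfigMap data.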
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:53:30.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 18 12:53:30.210: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:53:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9srjt" for this suite.
Dec 18 12:53:36.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:53:36.643: INFO: namespace: e2e-tests-kubectl-9srjt, resource: bindings, ignored listing per whitelist
Dec 18 12:53:36.656: INFO: namespace e2e-tests-kubectl-9srjt deletion completed in 6.304865027s

• [SLOW TEST:6.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:53:36.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:53:36.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-lkrsw" to be "success or failure"
Dec 18 12:53:36.929: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.86224ms
Dec 18 12:53:39.114: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194212887s
Dec 18 12:53:41.154: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234413453s
Dec 18 12:53:43.171: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251271419s
Dec 18 12:53:45.199: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.27989682s
Dec 18 12:53:47.269: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.349414742s
STEP: Saw pod success
Dec 18 12:53:47.269: INFO: Pod "downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:53:47.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 12:53:47.786: INFO: Waiting for pod downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004 to disappear
Dec 18 12:53:47.904: INFO: Pod downwardapi-volume-68217f35-2195-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:53:47.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lkrsw" for this suite.
Dec 18 12:53:53.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:53:53.994: INFO: namespace: e2e-tests-downward-api-lkrsw, resource: bindings, ignored listing per whitelist
Dec 18 12:53:54.106: INFO: namespace e2e-tests-downward-api-lkrsw deletion completed in 6.187663653s

• [SLOW TEST:17.449 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:53:54.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1218 12:54:25.903515       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 12:54:25.903: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:54:25.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-t5hfw" for this suite.
Dec 18 12:54:36.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:54:36.598: INFO: namespace: e2e-tests-gc-t5hfw, resource: bindings, ignored listing per whitelist
Dec 18 12:54:36.895: INFO: namespace e2e-tests-gc-t5hfw deletion completed in 10.984422489s

• [SLOW TEST:42.789 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:54:36.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:55:55.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-6n6kf" for this suite.
Dec 18 12:56:03.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:56:03.341: INFO: namespace: e2e-tests-container-runtime-6n6kf, resource: bindings, ignored listing per whitelist
Dec 18 12:56:03.372: INFO: namespace e2e-tests-container-runtime-6n6kf deletion completed in 8.35049308s

• [SLOW TEST:86.476 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
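The three containers above (`terminate-cmd-rpa`, `terminate-cmd-rpof`, `terminate-cmd-rpn`) appear to correspond to restart policies Always, OnFailure, and Never. A sketch of one such pod (names and exit code are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example   # illustrative name
spec:
  restartPolicy: OnFailure      # the suite repeats this with Always, OnFailure, and Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["/bin/sh", "-c", "exit 1"]   # the exit code drives the expected Phase/RestartCount/State
```

For each policy the test then asserts the observed `RestartCount`, `Phase`, `Ready` condition, and container `State` against the expected values.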
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:56:03.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-48qxz in namespace e2e-tests-proxy-qnw2l
I1218 12:56:03.832102       8 runners.go:184] Created replication controller with name: proxy-service-48qxz, namespace: e2e-tests-proxy-qnw2l, replica count: 1
I1218 12:56:04.884071       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:05.886542       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:06.887384       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:07.888440       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:08.889599       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:09.890202       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:10.891244       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:11.892285       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:12.893308       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:13.894719       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 12:56:14.896425       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 12:56:15.897316       8 runners.go:184] proxy-service-48qxz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 18 12:56:15.917: INFO: setup took 12.26031725s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 18 12:56:15.986: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qnw2l/pods/proxy-service-48qxz-5n6g2:162/proxy/: bar (200; 68.392297ms)
Dec 18 12:56:15.987: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qnw2l/pods/proxy-service-48qxz-5n6g2:160/proxy/: foo (200; 69.4344ms)
Dec 18 12:56:15.993: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qnw2l/services/http:proxy-service-48qxz:portname2/proxy/: bar (200; 74.900652ms)
Dec 18 12:56:16.001: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qnw2l/pods/proxy-service-48qxz-5n6g2:1080/proxy/: ...
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 12:56:41.487: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.405239ms)
Dec 18 12:56:41.566: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 79.608899ms)
Dec 18 12:56:41.582: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.065108ms)
Dec 18 12:56:41.592: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.725978ms)
Dec 18 12:56:41.605: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.204507ms)
Dec 18 12:56:41.613: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.926885ms)
Dec 18 12:56:41.620: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.547903ms)
Dec 18 12:56:41.628: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.496317ms)
Dec 18 12:56:41.636: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.138508ms)
Dec 18 12:56:41.650: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.028377ms)
Dec 18 12:56:41.662: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.32554ms)
Dec 18 12:56:41.673: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.2515ms)
Dec 18 12:56:41.684: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.813267ms)
Dec 18 12:56:41.697: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.002488ms)
Dec 18 12:56:41.708: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.575162ms)
Dec 18 12:56:41.720: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.64991ms)
Dec 18 12:56:41.728: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.429584ms)
Dec 18 12:56:41.733: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.325903ms)
Dec 18 12:56:41.738: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.018786ms)
Dec 18 12:56:41.746: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.259906ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:56:41.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-crh2f" for this suite.
Dec 18 12:56:47.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:56:47.956: INFO: namespace: e2e-tests-proxy-crh2f, resource: bindings, ignored listing per whitelist
Dec 18 12:56:48.155: INFO: namespace e2e-tests-proxy-crh2f deletion completed in 6.40188532s

• [SLOW TEST:6.935 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
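The proxy-subresource test above issues 20 sequential GETs against `/api/v1/nodes/<node>:10250/proxy/logs/` and logs an `(attempt) body... (status; latency)` line for each. A minimal sketch of that timing loop, assuming a caller-supplied `fn` standing in for the HTTP request (the function name and print format here are illustrative, not the framework's actual code):

```python
import time

def time_attempts(fn, attempts=20):
    """Run fn repeatedly and record per-attempt latency in milliseconds,
    mirroring the (0)..(19) attempt lines logged by the proxy test."""
    latencies = []
    for i in range(attempts):
        start = time.perf_counter()
        result = fn()  # in the real test: GET the node proxy URL
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        latencies.append(elapsed_ms)
        print(f"({i}) {result} ({elapsed_ms:.6f}ms)")
    return latencies
```

The spread of latencies in the log (roughly 5ms to 80ms across 20 attempts) is exactly what such a loop would surface.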
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:56:48.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-da819ff2-2195-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 12:56:48.818: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-6l2dw" to be "success or failure"
Dec 18 12:56:48.836: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.095892ms
Dec 18 12:56:51.990: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.171315412s
Dec 18 12:56:54.014: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.195322957s
Dec 18 12:56:56.027: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.209219498s
Dec 18 12:56:58.079: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.260912849s
Dec 18 12:57:00.532: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.713656246s
Dec 18 12:57:02.564: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.7455607s
Dec 18 12:57:04.582: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.763635575s
Dec 18 12:57:06.826: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.007625928s
Dec 18 12:57:08.939: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.120724323s
STEP: Saw pod success
Dec 18 12:57:08.939: INFO: Pod "pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 12:57:09.131: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 12:57:09.446: INFO: Waiting for pod pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004 to disappear
Dec 18 12:57:09.453: INFO: Pod pod-projected-configmaps-da82ea61-2195-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:57:09.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6l2dw" for this suite.
Dec 18 12:57:15.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:57:15.787: INFO: namespace: e2e-tests-projected-6l2dw, resource: bindings, ignored listing per whitelist
Dec 18 12:57:15.970: INFO: namespace e2e-tests-projected-6l2dw deletion completed in 6.49674898s

• [SLOW TEST:27.814 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:57:15.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:57:26.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-b9rzs" for this suite.
Dec 18 12:58:13.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:58:13.485: INFO: namespace: e2e-tests-kubelet-test-b9rzs, resource: bindings, ignored listing per whitelist
Dec 18 12:58:13.534: INFO: namespace e2e-tests-kubelet-test-b9rzs deletion completed in 46.688785242s

• [SLOW TEST:57.563 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:58:13.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-z9qb2
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 12:58:13.840: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 12:58:54.302: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-z9qb2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 12:58:54.302: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 12:58:56.108: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 12:58:56.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-z9qb2" for this suite.
Dec 18 12:59:20.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:59:20.686: INFO: namespace: e2e-tests-pod-network-test-z9qb2, resource: bindings, ignored listing per whitelist
Dec 18 12:59:20.695: INFO: namespace e2e-tests-pod-network-test-z9qb2 deletion completed in 24.558688156s

• [SLOW TEST:67.161 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
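The node-pod UDP check above runs `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081` inside a host test container and expects the netserver pod's name back. A toy stand-in for both sides of that exchange, assuming a local loopback server on an ephemeral port instead of the pod IP and port 8081:

```python
import socket
import threading

def start_udp_echo_server(host="127.0.0.1"):
    """Toy stand-in for the netserver pod's UDP listener; replies to one
    datagram with a hostname-like payload (hypothetical helper)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, 0))  # ephemeral port in place of the test's 8081

    def serve_one():
        data, addr = sock.recvfrom(1024)
        sock.sendto(b"netserver-0", addr)

    threading.Thread(target=serve_one, daemon=True).start()
    return sock.getsockname()[1]

def udp_probe(host, port, payload=b"hostName", timeout=1.0):
    """Analogue of the logged check: echo 'hostName' | nc -w 1 -u <ip> <port>."""
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(timeout)
    client.sendto(payload, (host, port))
    data, _ = client.recvfrom(1024)
    client.close()
    return data.decode()
```

The test's "Found all expected endpoints: [netserver-0]" line corresponds to the probe reply matching the expected pod name.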
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 12:59:20.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-62dmc
Dec 18 12:59:33.305: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-62dmc
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 12:59:33.309: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:03:35.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-62dmc" for this suite.
Dec 18 13:03:41.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:03:41.598: INFO: namespace: e2e-tests-container-probe-62dmc, resource: bindings, ignored listing per whitelist
Dec 18 13:03:41.607: INFO: namespace e2e-tests-container-probe-62dmc deletion completed in 6.247508329s

• [SLOW TEST:260.912 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
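The liveness test above creates a pod probed with `exec: cat /tmp/health` and then watches for four minutes to confirm the restart count stays at 0. The probe semantics are simple: the command exits 0 (healthy) while the file is readable, and the kubelet restarts the container once consecutive failures reach the configured threshold. A sketch of that logic, with hypothetical helper names and a simplified fixed-iteration loop:

```python
import os

def exec_probe_healthy(path="/tmp/health"):
    """Stand-in for `cat /tmp/health`: healthy iff the file is readable."""
    return os.path.isfile(path) and os.access(path, os.R_OK)

def run_probe_loop(path, checks, failure_threshold=3):
    """Count consecutive probe failures; a kubelet would restart the
    container each time failures reach failure_threshold."""
    failures = 0
    restarts = 0
    for _ in range(checks):
        if exec_probe_healthy(path):
            failures = 0  # any success resets the failure streak
        else:
            failures += 1
            if failures >= failure_threshold:
                restarts += 1
                failures = 0
    return restarts
```

In the passing case logged above, `/tmp/health` exists for the life of the container, so the restart count never moves off its initial value of 0.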
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:03:41.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-g2mc
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 13:03:41.844: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-g2mc" in namespace "e2e-tests-subpath-b6hk5" to be "success or failure"
Dec 18 13:03:41.895: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.735751ms
Dec 18 13:03:44.273: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429001508s
Dec 18 13:03:46.286: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442509631s
Dec 18 13:03:48.503: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659057251s
Dec 18 13:03:50.602: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758166152s
Dec 18 13:03:52.664: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.820170421s
Dec 18 13:03:55.587: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.743205669s
Dec 18 13:03:57.689: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.845082857s
Dec 18 13:03:59.715: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.87079606s
Dec 18 13:04:01.730: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 19.886074401s
Dec 18 13:04:03.777: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 21.933158697s
Dec 18 13:04:05.798: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 23.954409973s
Dec 18 13:04:07.825: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 25.981051039s
Dec 18 13:04:09.856: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 28.011631846s
Dec 18 13:04:11.889: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 30.044555509s
Dec 18 13:04:13.908: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 32.064153108s
Dec 18 13:04:15.928: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 34.083690331s
Dec 18 13:04:17.948: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Running", Reason="", readiness=false. Elapsed: 36.103988148s
Dec 18 13:04:19.969: INFO: Pod "pod-subpath-test-downwardapi-g2mc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.125402955s
STEP: Saw pod success
Dec 18 13:04:19.970: INFO: Pod "pod-subpath-test-downwardapi-g2mc" satisfied condition "success or failure"
Dec 18 13:04:19.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-g2mc container test-container-subpath-downwardapi-g2mc: 
STEP: delete the pod
Dec 18 13:04:20.296: INFO: Waiting for pod pod-subpath-test-downwardapi-g2mc to disappear
Dec 18 13:04:20.322: INFO: Pod pod-subpath-test-downwardapi-g2mc no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-g2mc
Dec 18 13:04:20.322: INFO: Deleting pod "pod-subpath-test-downwardapi-g2mc" in namespace "e2e-tests-subpath-b6hk5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:04:20.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-b6hk5" for this suite.
Dec 18 13:04:28.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:04:28.596: INFO: namespace: e2e-tests-subpath-b6hk5, resource: bindings, ignored listing per whitelist
Dec 18 13:04:30.730: INFO: namespace e2e-tests-subpath-b6hk5 deletion completed in 10.354723159s

• [SLOW TEST:49.123 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:04:30.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-edf3813c-2196-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 13:04:30.995: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-p48fh" to be "success or failure"
Dec 18 13:04:31.033: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.994823ms
Dec 18 13:04:33.462: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466883728s
Dec 18 13:04:35.486: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490959624s
Dec 18 13:04:37.501: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.505869516s
Dec 18 13:04:40.264: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.269051174s
Dec 18 13:04:43.115: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.120052116s
Dec 18 13:04:45.131: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.136299636s
STEP: Saw pod success
Dec 18 13:04:45.132: INFO: Pod "pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:04:45.138: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 13:04:45.964: INFO: Waiting for pod pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004 to disappear
Dec 18 13:04:45.982: INFO: Pod pod-projected-configmaps-edfb908f-2196-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:04:45.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p48fh" for this suite.
Dec 18 13:04:54.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:04:54.285: INFO: namespace: e2e-tests-projected-p48fh, resource: bindings, ignored listing per whitelist
Dec 18 13:04:54.533: INFO: namespace e2e-tests-projected-p48fh deletion completed in 8.40556423s

• [SLOW TEST:23.803 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
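Most tests in this log share one pattern: "Waiting up to 5m0s for pod ... to be 'success or failure'", followed by repeated `Phase="Pending" ... Elapsed: ...` lines every couple of seconds until the pod reports `Succeeded`. A sketch of that polling loop, assuming a caller-supplied `get_phase` callable in place of a real API read (the function and parameter names are illustrative, not the framework's):

```python
import time

def wait_for_phase(get_phase, terminal=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll a phase-returning callable until a terminal phase is reached,
    mirroring the framework's 'Waiting up to 5m0s for pod ...' loop."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", readiness=false. Elapsed: {elapsed:.3f}s')
        if phase in terminal:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f'pod still "{phase}" after {timeout}s')
        time.sleep(interval)
```

The "Saw pod success" / "satisfied condition" lines in the log correspond to this loop returning `Succeeded` within the timeout.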
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:04:54.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:05:07.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-lh7tp" for this suite.
Dec 18 13:05:15.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:05:15.407: INFO: namespace: e2e-tests-emptydir-wrapper-lh7tp, resource: bindings, ignored listing per whitelist
Dec 18 13:05:15.460: INFO: namespace e2e-tests-emptydir-wrapper-lh7tp deletion completed in 8.229786708s

• [SLOW TEST:20.926 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:05:15.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 18 13:05:15.798: INFO: Waiting up to 5m0s for pod "client-containers-08995741-2197-11ea-ad77-0242ac110004" in namespace "e2e-tests-containers-4bgjg" to be "success or failure"
Dec 18 13:05:15.821: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 22.486483ms
Dec 18 13:05:17.837: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038549969s
Dec 18 13:05:19.865: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066707787s
Dec 18 13:05:22.296: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496970199s
Dec 18 13:05:24.306: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.506851865s
Dec 18 13:05:26.330: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.531667089s
STEP: Saw pod success
Dec 18 13:05:26.331: INFO: Pod "client-containers-08995741-2197-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:05:26.338: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-08995741-2197-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 13:05:27.102: INFO: Waiting for pod client-containers-08995741-2197-11ea-ad77-0242ac110004 to disappear
Dec 18 13:05:27.124: INFO: Pod client-containers-08995741-2197-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:05:27.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4bgjg" for this suite.
Dec 18 13:05:33.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:05:33.324: INFO: namespace: e2e-tests-containers-4bgjg, resource: bindings, ignored listing per whitelist
Dec 18 13:05:33.447: INFO: namespace e2e-tests-containers-4bgjg deletion completed in 6.307780013s

• [SLOW TEST:17.987 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
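Editor's note: the test above verifies that a pod spec can override a container image's default entrypoint. A minimal sketch of the kind of pod it creates (the pod name and image here are illustrative, not taken from the log; the conformance test uses its own busybox-based image):

```yaml
# In Kubernetes, `command` overrides the image's ENTRYPOINT and
# `args` overrides its CMD.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    command: ["/bin/echo"]          # replaces the image ENTRYPOINT
    args: ["entrypoint", "overridden"]
```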
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:05:33.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 18 13:05:33.655: INFO: Waiting up to 5m0s for pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-qh2zr" to be "success or failure"
Dec 18 13:05:33.676: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.655225ms
Dec 18 13:05:35.690: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035064087s
Dec 18 13:05:37.712: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056745951s
Dec 18 13:05:40.653: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.99782982s
Dec 18 13:05:42.715: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.059689376s
Dec 18 13:05:44.761: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.106038079s
STEP: Saw pod success
Dec 18 13:05:44.761: INFO: Pod "pod-134fa9b7-2197-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:05:44.796: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-134fa9b7-2197-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 13:05:44.927: INFO: Waiting for pod pod-134fa9b7-2197-11ea-ad77-0242ac110004 to disappear
Dec 18 13:05:44.932: INFO: Pod pod-134fa9b7-2197-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:05:44.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qh2zr" for this suite.
Dec 18 13:05:51.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:05:51.249: INFO: namespace: e2e-tests-emptydir-qh2zr, resource: bindings, ignored listing per whitelist
Dec 18 13:05:51.290: INFO: namespace e2e-tests-emptydir-qh2zr deletion completed in 6.35267058s

• [SLOW TEST:17.842 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
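Editor's note: a sketch of the pod shape this EmptyDir test exercises, assuming the usual conformance pattern of a short-lived container that inspects the mount point (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # omitting `medium` selects the default (node-disk-backed)
```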
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:05:51.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:05:51.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-c5f9q" to be "success or failure"
Dec 18 13:05:51.623: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 74.847226ms
Dec 18 13:05:53.648: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099690695s
Dec 18 13:05:55.683: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134051496s
Dec 18 13:05:58.265: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.71600022s
Dec 18 13:06:00.282: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.733792696s
Dec 18 13:06:02.317: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.768202621s
STEP: Saw pod success
Dec 18 13:06:02.317: INFO: Pod "downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:06:02.349: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 13:06:02.584: INFO: Waiting for pod downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004 to disappear
Dec 18 13:06:02.602: INFO: Pod downwardapi-volume-1e007796-2197-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:06:02.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c5f9q" for this suite.
Dec 18 13:06:08.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:06:08.802: INFO: namespace: e2e-tests-projected-c5f9q, resource: bindings, ignored listing per whitelist
Dec 18 13:06:08.818: INFO: namespace e2e-tests-projected-c5f9q deletion completed in 6.206215395s

• [SLOW TEST:17.528 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
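Editor's note: the projected downwardAPI test above sets a per-item file mode. A hedged sketch of such a volume (pod name, paths, and the 0400 mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400              # per-item file mode, the property under test
```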
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:06:08.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 18 13:06:22.119: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:06:24.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-bt7q6" for this suite.
Dec 18 13:07:06.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:07:07.223: INFO: namespace: e2e-tests-replicaset-bt7q6, resource: bindings, ignored listing per whitelist
Dec 18 13:07:07.227: INFO: namespace e2e-tests-replicaset-bt7q6 deletion completed in 42.952515125s

• [SLOW TEST:58.408 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
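Editor's note: the adopt-and-release sequence above hinges on label selectors. A sketch of the two objects involved (image is illustrative; the test uses the `pod-adoption-release` name shown in the log):

```yaml
# An orphan pod carrying the label...
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release      # matches the ReplicaSet selector below
spec:
  containers:
  - name: app
    image: nginx                    # assumed image
---
# ...and a ReplicaSet whose selector matches it. On creation the
# controller adopts the orphan (adds an ownerReference); changing the
# pod's `name` label afterwards makes the controller release it again.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx
```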
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:07:07.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-4b3f7915-2197-11ea-ad77-0242ac110004
STEP: Creating secret with name s-test-opt-upd-4b3f79d1-2197-11ea-ad77-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4b3f7915-2197-11ea-ad77-0242ac110004
STEP: Updating secret s-test-opt-upd-4b3f79d1-2197-11ea-ad77-0242ac110004
STEP: Creating secret with name s-test-opt-create-4b3f79fd-2197-11ea-ad77-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:08:39.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jzr7q" for this suite.
Dec 18 13:09:06.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:09:06.125: INFO: namespace: e2e-tests-projected-jzr7q, resource: bindings, ignored listing per whitelist
Dec 18 13:09:06.191: INFO: namespace e2e-tests-projected-jzr7q deletion completed in 26.359559746s

• [SLOW TEST:118.964 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
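Editor's note: the secret names in this test carry `-opt-del`, `-opt-upd`, and `-opt-create` suffixes because all three sources are marked optional. A sketch of the projected volume involved (pod name, image, and mount path are illustrative; secret names are shortened from the log):

```yaml
# With `optional: true`, a missing secret does not block pod startup,
# and creating, updating, or deleting it later is reflected in the
# mounted files -- the behavior the test waits to observe.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example    # illustrative name
spec:
  containers:
  - name: app
    image: busybox                  # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # deleted mid-test
          optional: true
      - secret:
          name: s-test-opt-create   # may not exist yet at pod creation
          optional: true
```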
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:09:06.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8g5bb
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 18 13:09:06.400: INFO: Found 0 stateful pods, waiting for 3
Dec 18 13:09:16.986: INFO: Found 1 stateful pods, waiting for 3
Dec 18 13:09:27.017: INFO: Found 2 stateful pods, waiting for 3
Dec 18 13:09:36.545: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:09:36.546: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:09:36.546: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 13:09:46.417: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:09:46.417: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:09:46.417: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 18 13:09:46.477: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 18 13:09:56.700: INFO: Updating stateful set ss2
Dec 18 13:09:56.920: INFO: Waiting for Pod e2e-tests-statefulset-8g5bb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 18 13:10:07.343: INFO: Found 2 stateful pods, waiting for 3
Dec 18 13:10:18.073: INFO: Found 2 stateful pods, waiting for 3
Dec 18 13:10:27.362: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:10:27.362: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:10:27.362: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 13:10:37.664: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:10:37.664: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:10:37.664: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 18 13:10:37.734: INFO: Updating stateful set ss2
Dec 18 13:10:37.843: INFO: Waiting for Pod e2e-tests-statefulset-8g5bb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:10:48.048: INFO: Updating stateful set ss2
Dec 18 13:10:48.168: INFO: Waiting for StatefulSet e2e-tests-statefulset-8g5bb/ss2 to complete update
Dec 18 13:10:48.168: INFO: Waiting for Pod e2e-tests-statefulset-8g5bb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:10:58.648: INFO: Waiting for StatefulSet e2e-tests-statefulset-8g5bb/ss2 to complete update
Dec 18 13:10:58.649: INFO: Waiting for Pod e2e-tests-statefulset-8g5bb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:11:08.220: INFO: Waiting for StatefulSet e2e-tests-statefulset-8g5bb/ss2 to complete update
Dec 18 13:11:08.220: INFO: Waiting for Pod e2e-tests-statefulset-8g5bb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:11:18.829: INFO: Waiting for StatefulSet e2e-tests-statefulset-8g5bb/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 18 13:11:28.264: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8g5bb
Dec 18 13:11:28.270: INFO: Scaling statefulset ss2 to 0
Dec 18 13:11:58.321: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 13:11:58.328: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:11:58.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8g5bb" for this suite.
Dec 18 13:12:06.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:12:06.590: INFO: namespace: e2e-tests-statefulset-8g5bb, resource: bindings, ignored listing per whitelist
Dec 18 13:12:06.649: INFO: namespace e2e-tests-statefulset-8g5bb deletion completed in 8.239815445s

• [SLOW TEST:180.457 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
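Editor's note: the canary and phased rollout above are driven by the RollingUpdate partition. A sketch of the relevant StatefulSet fields, using the names and images that appear in the log (other fields are illustrative):

```yaml
# Only pods with ordinal >= partition are moved to the new template
# revision. With 3 replicas, partition: 2 updates ss2-2 alone (the
# canary); lowering the partition step by step produces the phased
# rollout of ss2-1 and then ss2-0 seen in the log.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                  # canary: only ss2-2 gets the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.15-alpine  # updated image from the log
```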
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:12:06.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-fdc42a6f-2197-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume configMaps
Dec 18 13:12:06.969: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-t5wjl" to be "success or failure"
Dec 18 13:12:07.043: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 73.438527ms
Dec 18 13:12:09.055: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085766305s
Dec 18 13:12:11.095: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125270201s
Dec 18 13:12:13.106: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136942064s
Dec 18 13:12:15.643: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.673339081s
Dec 18 13:12:17.669: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.699288297s
Dec 18 13:12:19.707: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.737036859s
STEP: Saw pod success
Dec 18 13:12:19.707: INFO: Pod "pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:12:19.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 13:12:20.038: INFO: Waiting for pod pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004 to disappear
Dec 18 13:12:20.054: INFO: Pod pod-projected-configmaps-fdc5265a-2197-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:12:20.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t5wjl" for this suite.
Dec 18 13:12:26.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:12:26.596: INFO: namespace: e2e-tests-projected-t5wjl, resource: bindings, ignored listing per whitelist
Dec 18 13:12:26.624: INFO: namespace e2e-tests-projected-t5wjl deletion completed in 6.559517185s

• [SLOW TEST:19.976 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
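Editor's note: "as non-root" here means the consuming container runs under a non-root UID and must still be able to read the projected file. A sketch under that assumption (the UID, image, and paths are illustrative; the configMap name is shortened from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root UID; the test checks the file is still readable
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```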
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:12:26.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 18 13:12:26.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lczkb'
Dec 18 13:12:28.982: INFO: stderr: ""
Dec 18 13:12:28.982: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 18 13:12:30.001: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:30.001: INFO: Found 0 / 1
Dec 18 13:12:31.005: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:31.005: INFO: Found 0 / 1
Dec 18 13:12:32.000: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:32.001: INFO: Found 0 / 1
Dec 18 13:12:33.002: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:33.002: INFO: Found 0 / 1
Dec 18 13:12:33.999: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:33.999: INFO: Found 0 / 1
Dec 18 13:12:36.912: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:36.913: INFO: Found 0 / 1
Dec 18 13:12:37.460: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:37.461: INFO: Found 0 / 1
Dec 18 13:12:39.022: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:39.022: INFO: Found 0 / 1
Dec 18 13:12:40.006: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:40.006: INFO: Found 0 / 1
Dec 18 13:12:40.997: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:40.998: INFO: Found 0 / 1
Dec 18 13:12:42.103: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:42.104: INFO: Found 1 / 1
Dec 18 13:12:42.104: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 18 13:12:42.135: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:42.135: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 18 13:12:42.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-95l46 --namespace=e2e-tests-kubectl-lczkb -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 18 13:12:42.387: INFO: stderr: ""
Dec 18 13:12:42.387: INFO: stdout: "pod/redis-master-95l46 patched\n"
STEP: checking annotations
Dec 18 13:12:42.412: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 13:12:42.413: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:12:42.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lczkb" for this suite.
Dec 18 13:13:08.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:13:08.916: INFO: namespace: e2e-tests-kubectl-lczkb, resource: bindings, ignored listing per whitelist
Dec 18 13:13:08.950: INFO: namespace e2e-tests-kubectl-lczkb deletion completed in 26.384704634s

• [SLOW TEST:42.325 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:13:08.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 18 13:13:09.165: INFO: Waiting up to 5m0s for pod "pod-22cf0023-2198-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-7kwfz" to be "success or failure"
Dec 18 13:13:09.192: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.417925ms
Dec 18 13:13:11.205: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040039354s
Dec 18 13:13:13.235: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069885153s
Dec 18 13:13:15.394: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2287341s
Dec 18 13:13:17.411: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245160946s
Dec 18 13:13:19.495: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.329840869s
STEP: Saw pod success
Dec 18 13:13:19.496: INFO: Pod "pod-22cf0023-2198-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:13:19.508: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-22cf0023-2198-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 13:13:19.661: INFO: Waiting for pod pod-22cf0023-2198-11ea-ad77-0242ac110004 to disappear
Dec 18 13:13:19.677: INFO: Pod pod-22cf0023-2198-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:13:19.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7kwfz" for this suite.
Dec 18 13:13:25.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:13:25.920: INFO: namespace: e2e-tests-emptydir-7kwfz, resource: bindings, ignored listing per whitelist
Dec 18 13:13:25.941: INFO: namespace e2e-tests-emptydir-7kwfz deletion completed in 6.2485894s

• [SLOW TEST:16.990 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:13:25.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xb4nk
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 13:13:26.152: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 13:14:02.550: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xb4nk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 13:14:02.551: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 13:14:03.052: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:14:03.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xb4nk" for this suite.
Dec 18 13:14:27.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:14:27.261: INFO: namespace: e2e-tests-pod-network-test-xb4nk, resource: bindings, ignored listing per whitelist
Dec 18 13:14:27.328: INFO: namespace e2e-tests-pod-network-test-xb4nk deletion completed in 24.256994611s

• [SLOW TEST:61.386 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
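The connectivity check above execs `curl http://<pod-ip>:8080/hostName` from a host-test pod and collects the returned hostnames until every expected netserver endpoint has been seen. A minimal self-contained sketch of that hostname-matching logic, with a local HTTP server standing in for a real netserver pod (the server, `collect_endpoints`, and the pod name are illustrative stand-ins, not the framework's actual code):

```python
# Each netserver pod answers GET /hostName with its own pod name; the test
# polls and accumulates responses until the set of names it has seen equals
# the set of expected endpoints. A local server stands in for the pod here.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

POD_NAME = "netserver-0"  # what a real netserver pod would report

class HostNameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hostName":
            body = POD_NAME.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch's output quiet
        pass

def collect_endpoints(url, expected, attempts=5):
    """Poll the /hostName endpoint until all expected pod names are seen."""
    seen = set()
    for _ in range(attempts):
        with urllib.request.urlopen(url, timeout=2) as resp:
            name = resp.read().decode().strip()
            if name:  # the real check pipes through grep to drop blank lines
                seen.add(name)
        if seen == expected:
            break
    return seen

server = HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
found = collect_endpoints(f"http://127.0.0.1:{port}/hostName", {"netserver-0"})
print("Found all expected endpoints:", sorted(found))
server.shutdown()
```

The `grep -v '^\s*$'` in the logged command serves the same purpose as the blank-line filter here: an empty body must not count as a seen endpoint.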
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:14:27.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 13:14:27.689: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 18 13:14:27.741: INFO: Number of nodes with available pods: 0
Dec 18 13:14:27.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:29.549: INFO: Number of nodes with available pods: 0
Dec 18 13:14:29.549: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:29.766: INFO: Number of nodes with available pods: 0
Dec 18 13:14:29.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:30.770: INFO: Number of nodes with available pods: 0
Dec 18 13:14:30.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:31.762: INFO: Number of nodes with available pods: 0
Dec 18 13:14:31.762: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:32.768: INFO: Number of nodes with available pods: 0
Dec 18 13:14:32.768: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:34.395: INFO: Number of nodes with available pods: 0
Dec 18 13:14:34.395: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:35.211: INFO: Number of nodes with available pods: 0
Dec 18 13:14:35.212: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:36.552: INFO: Number of nodes with available pods: 0
Dec 18 13:14:36.552: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:36.806: INFO: Number of nodes with available pods: 0
Dec 18 13:14:36.806: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:37.773: INFO: Number of nodes with available pods: 0
Dec 18 13:14:37.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:38.769: INFO: Number of nodes with available pods: 0
Dec 18 13:14:38.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:39.794: INFO: Number of nodes with available pods: 1
Dec 18 13:14:39.794: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 18 13:14:40.133: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:41.366: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:42.394: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:44.169: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:44.397: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:45.455: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:46.366: INFO: Wrong image for pod: daemon-set-dtlrm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 13:14:46.366: INFO: Pod daemon-set-dtlrm is not available
Dec 18 13:14:49.237: INFO: Pod daemon-set-wskng is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 18 13:14:49.909: INFO: Number of nodes with available pods: 0
Dec 18 13:14:49.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:50.933: INFO: Number of nodes with available pods: 0
Dec 18 13:14:50.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:51.931: INFO: Number of nodes with available pods: 0
Dec 18 13:14:51.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:52.925: INFO: Number of nodes with available pods: 0
Dec 18 13:14:52.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:54.617: INFO: Number of nodes with available pods: 0
Dec 18 13:14:54.617: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:54.996: INFO: Number of nodes with available pods: 0
Dec 18 13:14:54.996: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:56.015: INFO: Number of nodes with available pods: 0
Dec 18 13:14:56.015: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 18 13:14:56.992: INFO: Number of nodes with available pods: 1
Dec 18 13:14:56.992: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lslrs, will wait for the garbage collector to delete the pods
Dec 18 13:14:57.084: INFO: Deleting DaemonSet.extensions daemon-set took: 15.38466ms
Dec 18 13:14:57.185: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.690851ms
Dec 18 13:15:04.900: INFO: Number of nodes with available pods: 0
Dec 18 13:15:04.900: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 13:15:04.908: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lslrs/daemonsets","resourceVersion":"15240813"},"items":null}

Dec 18 13:15:04.914: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lslrs/pods","resourceVersion":"15240813"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:15:04.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lslrs" for this suite.
Dec 18 13:15:11.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:15:11.301: INFO: namespace: e2e-tests-daemonsets-lslrs, resource: bindings, ignored listing per whitelist
Dec 18 13:15:11.338: INFO: namespace e2e-tests-daemonsets-lslrs deletion completed in 6.358725972s

• [SLOW TEST:44.010 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
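The repeated "Number of nodes with available pods" lines above come from a poll loop: the framework re-checks the DaemonSet's status on an interval until every node runs an available pod or a deadline passes. A minimal sketch of that poll-until-ready pattern (`wait_for`, `pod_available`, and the timings are illustrative, not the framework's API):

```python
# Poll-until-ready pattern behind the repeated status lines: re-evaluate a
# condition on a fixed interval until it holds or the timeout expires.
import time

def wait_for(check_fn, timeout=5.0, interval=0.1):
    """Poll check_fn until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_fn():
            return True
        time.sleep(interval)
    return False

# Simulate a daemon pod that becomes available after a few polls.
state = {"polls": 0}
def pod_available():
    state["polls"] += 1
    return state["polls"] >= 3  # "available" on the third check

assert wait_for(pod_available)
print("Number of running nodes: 1, number of available pods: 1")
```

Each logged status line corresponds to one failed iteration of such a loop; the test fails only if the deadline is reached first.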
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:15:11.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1218 13:15:22.604438       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 13:15:22.604: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:15:22.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9txqj" for this suite.
Dec 18 13:15:28.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:15:28.754: INFO: namespace: e2e-tests-gc-9txqj, resource: bindings, ignored listing per whitelist
Dec 18 13:15:28.826: INFO: namespace e2e-tests-gc-9txqj deletion completed in 6.212947829s

• [SLOW TEST:17.488 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
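The garbage-collector test above deletes the RC without orphaning and then waits for every pod owned by it to disappear. A toy model of that cascade, using a dict keyed by UID (the `cascade_delete` helper and the object names are illustrative only; the real collector works from `ownerReferences` on live API objects):

```python
# When an owner is deleted without orphaning, the garbage collector removes
# every dependent whose ownerReference points at it. Dict-based model only.
def cascade_delete(objects, owner_uid):
    """Delete the owner and all objects owned by it."""
    return {
        uid: obj for uid, obj in objects.items()
        if uid != owner_uid and obj.get("ownerReference") != owner_uid
    }

cluster = {
    "rc-1": {"kind": "ReplicationController"},
    "pod-a": {"kind": "Pod", "ownerReference": "rc-1"},
    "pod-b": {"kind": "Pod", "ownerReference": "rc-1"},
    "pod-c": {"kind": "Pod", "ownerReference": "rc-2"},  # different owner
}
remaining = cascade_delete(cluster, "rc-1")
print(sorted(remaining))  # only the pod with an unrelated owner survives
```

The "wait for all pods to be garbage collected" step is then just the polling pattern from earlier applied to this end state.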
SSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:15:28.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-z42bp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z42bp to expose endpoints map[]
Dec 18 13:15:29.131: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z42bp exposes endpoints map[] (11.743879ms elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-z42bp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z42bp to expose endpoints map[pod1:[100]]
Dec 18 13:15:33.388: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.158663743s elapsed, will retry)
Dec 18 13:15:39.944: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z42bp exposes endpoints map[pod1:[100]] (10.714659726s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-z42bp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z42bp to expose endpoints map[pod1:[100] pod2:[101]]
Dec 18 13:15:45.206: INFO: Unexpected endpoints: found map[764842ed-2198-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.244581159s elapsed, will retry)
Dec 18 13:15:49.832: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z42bp exposes endpoints map[pod1:[100] pod2:[101]] (9.87062327s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-z42bp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z42bp to expose endpoints map[pod2:[101]]
Dec 18 13:15:52.212: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z42bp exposes endpoints map[pod2:[101]] (2.365010836s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-z42bp
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-z42bp to expose endpoints map[]
Dec 18 13:15:52.773: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-z42bp exposes endpoints map[] (112.251862ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:15:52.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-z42bp" for this suite.
Dec 18 13:16:17.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:16:17.399: INFO: namespace: e2e-tests-services-z42bp, resource: bindings, ignored listing per whitelist
Dec 18 13:16:17.501: INFO: namespace e2e-tests-services-z42bp deletion completed in 24.483684652s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.674 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
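The Services test above repeatedly compares the expected map of pod name to ports (e.g. `map[pod1:[100] pod2:[101]]`) against what the Endpoints object currently reports, retrying on mismatch; note the intermediate poll that found a pod UID instead of a name. A small sketch of that comparison, with `endpoints_match` and the UID translation as illustrative stand-ins for the framework's validation step:

```python
# The Endpoints object keys entries by pod UID; the test translates UIDs
# back to pod names before comparing against the expected map, and retries
# until the two maps are equal. Names and UIDs here are illustrative.
def endpoints_match(expected, observed, uid_to_name):
    """Compare observed endpoints (keyed by pod UID) to the expected map."""
    translated = {uid_to_name.get(uid, uid): ports for uid, ports in observed.items()}
    return translated == expected

uid_to_name = {"uid-pod1": "pod1", "uid-pod2": "pod2"}
expected = {"pod1": [100], "pod2": [101]}

# First poll: only pod1's endpoint is ready -> mismatch, the test retries.
assert not endpoints_match(expected, {"uid-pod1": [100]}, uid_to_name)
# Later poll: both endpoints present -> validated.
assert endpoints_match(expected, {"uid-pod1": [100], "uid-pod2": [101]}, uid_to_name)
print("successfully validated endpoints", expected)
```

This is why the log's "Unexpected endpoints" lines are informational rather than failures: each one is a retried poll inside the 3m0s window.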
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:16:17.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 13:16:17.712: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 18 13:16:22.737: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 18 13:16:28.812: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 18 13:16:30.825: INFO: Creating deployment "test-rollover-deployment"
Dec 18 13:16:30.909: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 18 13:16:33.159: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 18 13:16:33.584: INFO: Ensure that both replica sets have 1 created replica
Dec 18 13:16:33.618: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 18 13:16:33.641: INFO: Updating deployment test-rollover-deployment
Dec 18 13:16:33.641: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 18 13:16:36.076: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 18 13:16:36.388: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 18 13:16:36.404: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:36.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271796, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:38.629: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:38.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271796, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:40.428: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:40.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271796, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:43.179: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:43.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271796, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:45.419: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:45.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271796, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:46.423: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:46.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271796, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:48.432: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:48.432: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:50.428: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:50.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:52.533: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:52.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:54.437: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:54.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:56.453: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 13:16:56.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271791, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271807, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712271790, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:16:58.470: INFO: 
Dec 18 13:16:58.471: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 18 13:16:59.377: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-g9882,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9882/deployments/test-rollover-deployment,UID:9b0cf068-2198-11ea-a994-fa163e34d433,ResourceVersion:15241125,Generation:2,CreationTimestamp:2019-12-18 13:16:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-18 13:16:31 +0000 UTC 2019-12-18 13:16:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-18 13:16:57 +0000 UTC 2019-12-18 13:16:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 18 13:16:59.385: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-g9882,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9882/replicasets/test-rollover-deployment-5b8479fdb6,UID:9cbbf6f5-2198-11ea-a994-fa163e34d433,ResourceVersion:15241115,Generation:2,CreationTimestamp:2019-12-18 13:16:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9b0cf068-2198-11ea-a994-fa163e34d433 0xc001bc1a77 0xc001bc1a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 18 13:16:59.385: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 18 13:16:59.385: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-g9882,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9882/replicasets/test-rollover-controller,UID:9339dbac-2198-11ea-a994-fa163e34d433,ResourceVersion:15241124,Generation:2,CreationTimestamp:2019-12-18 13:16:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9b0cf068-2198-11ea-a994-fa163e34d433 0xc001bc17ef 0xc001bc1870}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 13:16:59.386: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-g9882,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g9882/replicasets/test-rollover-deployment-58494b7559,UID:9b1d9efc-2198-11ea-a994-fa163e34d433,ResourceVersion:15241078,Generation:2,CreationTimestamp:2019-12-18 13:16:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9b0cf068-2198-11ea-a994-fa163e34d433 0xc001bc1937 0xc001bc1938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 13:16:59.393: INFO: Pod "test-rollover-deployment-5b8479fdb6-4pktr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-4pktr,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-g9882,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g9882/pods/test-rollover-deployment-5b8479fdb6-4pktr,UID:9da70214-2198-11ea-a994-fa163e34d433,ResourceVersion:15241100,Generation:0,CreationTimestamp:2019-12-18 13:16:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 9cbbf6f5-2198-11ea-a994-fa163e34d433 0xc00266a147 0xc00266a148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-88tsb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-88tsb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-88tsb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00266a1c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00266a1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:16:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:16:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:16:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:16:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-18 13:16:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-18 13:16:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://96b88651e560585f49fed041032dcf0f703c9b8448bde53925c63769dcef1177}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:16:59.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-g9882" for this suite.
Dec 18 13:17:07.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:17:07.630: INFO: namespace: e2e-tests-deployment-g9882, resource: bindings, ignored listing per whitelist
Dec 18 13:17:07.738: INFO: namespace e2e-tests-deployment-g9882 deletion completed in 8.332204748s

• [SLOW TEST:50.237 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
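The Deployment dump above shows the parameters that drive this rollover test: `MaxUnavailable:0`, `MaxSurge:1`, and `MinReadySeconds:10`. A minimal sketch (not taken from the e2e suite itself) of the replica bounds those parameters imply during a rolling update:

```python
# Sketch (not from the e2e suite) of the replica bounds implied by the
# RollingUpdate parameters in the Deployment dump above.
def rolling_update_bounds(replicas, max_surge, max_unavailable):
    """Return (min_available, max_total) pods allowed mid-rollout."""
    return replicas - max_unavailable, replicas + max_surge

# Replicas:*1, MaxSurge:1, MaxUnavailable:0 -> at most 2 pods may exist,
# and at least 1 must stay available throughout the rollover.
assert rolling_update_bounds(1, 1, 0) == (1, 2)
```

With one desired replica the controller may briefly run two pods but never fewer than one available, which is why the old `test-rollover-controller` ReplicaSet is only scaled to zero after the new pod has been Ready for `MinReadySeconds`.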
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:17:07.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b1bb75fe-2198-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 13:17:09.165: INFO: Waiting up to 5m0s for pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-4hwb7" to be "success or failure"
Dec 18 13:17:09.175: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.211882ms
Dec 18 13:17:11.189: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0238201s
Dec 18 13:17:13.209: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043637126s
Dec 18 13:17:15.795: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629902854s
Dec 18 13:17:17.858: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692873736s
Dec 18 13:17:19.873: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.707343182s
STEP: Saw pod success
Dec 18 13:17:19.873: INFO: Pod "pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:17:19.882: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 18 13:17:21.136: INFO: Waiting for pod pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004 to disappear
Dec 18 13:17:21.160: INFO: Pod pod-secrets-b1e42e60-2198-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:17:21.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4hwb7" for this suite.
Dec 18 13:17:27.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:17:27.758: INFO: namespace: e2e-tests-secrets-4hwb7, resource: bindings, ignored listing per whitelist
Dec 18 13:17:27.822: INFO: namespace e2e-tests-secrets-4hwb7 deletion completed in 6.629136441s
STEP: Destroying namespace "e2e-tests-secret-namespace-wjvhb" for this suite.
Dec 18 13:17:33.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:17:34.008: INFO: namespace: e2e-tests-secret-namespace-wjvhb, resource: bindings, ignored listing per whitelist
Dec 18 13:17:34.124: INFO: namespace e2e-tests-secret-namespace-wjvhb deletion completed in 6.30234009s

• [SLOW TEST:26.384 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
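The repeated `Phase="Pending"` lines with growing `Elapsed` values above reflect the framework's wait loop: it polls the pod's phase at a fixed interval until the pod reaches a terminal phase or the timeout (here 5m0s) expires. A minimal sketch of that pattern, with `get_phase` as a hypothetical stand-in for the real API call:

```python
import time

# Minimal sketch of the "success or failure" wait seen in the log: poll
# the pod phase every `interval` seconds until it is terminal or the
# timeout expires. `get_phase` is a hypothetical stand-in for the API
# call the framework makes; clock/sleep are injectable for testing.
def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Stubbed phase sequence mirroring the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))
# prints "Succeeded"
```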
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:17:34.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 18 13:17:34.748: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 18 13:17:39.773: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:17:41.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-mkjr7" for this suite.
Dec 18 13:17:54.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:17:56.117: INFO: namespace: e2e-tests-replication-controller-mkjr7, resource: bindings, ignored listing per whitelist
Dec 18 13:17:56.268: INFO: namespace e2e-tests-replication-controller-mkjr7 deletion completed in 13.541000494s

• [SLOW TEST:22.144 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
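This test changes a matched pod's label so it no longer satisfies the ReplicationController's selector, at which point the controller orphans ("releases") the pod rather than deleting it. A toy sketch of the selector-matching rule involved; the replacement label value is illustrative:

```python
# Toy sketch of label-selector matching: a ReplicationController keeps a
# pod only while every selector key/value pair appears in the pod's
# labels. The "pod-release" name comes from the log; values are toy data.
def selector_matches(selector, labels):
    return all(labels.get(key) == value for key, value in selector.items())

rc_selector = {"name": "pod-release"}
pod_labels = {"name": "pod-release"}
assert selector_matches(rc_selector, pod_labels)      # pod is managed

pod_labels["name"] = "pod-released"                   # label changed
assert not selector_matches(rc_selector, pod_labels)  # pod is released
```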
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:17:56.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-ce37df18-2198-11ea-ad77-0242ac110004
STEP: Creating a pod to test consume secrets
Dec 18 13:17:56.689: INFO: Waiting up to 5m0s for pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004" in namespace "e2e-tests-secrets-ckkwv" to be "success or failure"
Dec 18 13:17:56.790: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 101.458496ms
Dec 18 13:17:59.377: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688309735s
Dec 18 13:18:01.599: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.909752819s
Dec 18 13:18:04.109: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.420352706s
Dec 18 13:18:06.116: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.42691654s
Dec 18 13:18:08.130: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.441160524s
STEP: Saw pod success
Dec 18 13:18:08.130: INFO: Pod "pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:18:08.135: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Dec 18 13:18:08.204: INFO: Waiting for pod pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004 to disappear
Dec 18 13:18:08.334: INFO: Pod pod-secrets-ce38cc5f-2198-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:18:08.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ckkwv" for this suite.
Dec 18 13:18:14.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:14.860: INFO: namespace: e2e-tests-secrets-ckkwv, resource: bindings, ignored listing per whitelist
Dec 18 13:18:14.926: INFO: namespace e2e-tests-secrets-ckkwv deletion completed in 6.572031696s

• [SLOW TEST:18.656 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
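The "with mappings" variant exercises the `items` field of a secret volume source: instead of one file per key named after the key, each listed key is projected at its given path. A sketch of that rule; the key and path names below are illustrative, not taken from the log:

```python
# Sketch of the secret-volume "mappings" rule: with no items, every key
# becomes a file named after the key; with items, each listed key is
# written at its given path instead. Key and path names are illustrative.
def project_secret(data, items=None):
    if not items:
        return dict(data)
    return {item["path"]: data[item["key"]] for item in items}

secret = {"data-1": b"value-1"}
assert project_secret(secret) == {"data-1": b"value-1"}
assert project_secret(secret, [{"key": "data-1", "path": "new/path/data-1"}]) \
    == {"new/path/data-1": b"value-1"}
```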
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:18:14.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:18:15.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004" in namespace "e2e-tests-projected-z7jv7" to be "success or failure"
Dec 18 13:18:15.332: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 106.880956ms
Dec 18 13:18:17.611: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386626528s
Dec 18 13:18:19.636: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411192062s
Dec 18 13:18:22.286: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.061265345s
Dec 18 13:18:24.322: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.097223898s
Dec 18 13:18:26.377: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.152202624s
STEP: Saw pod success
Dec 18 13:18:26.377: INFO: Pod "downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:18:26.420: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004 container client-container: 
STEP: delete the pod
Dec 18 13:18:26.614: INFO: Waiting for pod downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004 to disappear
Dec 18 13:18:26.684: INFO: Pod downwardapi-volume-d942b5d6-2198-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:18:26.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z7jv7" for this suite.
Dec 18 13:18:34.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:34.798: INFO: namespace: e2e-tests-projected-z7jv7, resource: bindings, ignored listing per whitelist
Dec 18 13:18:34.858: INFO: namespace e2e-tests-projected-z7jv7 deletion completed in 8.161410591s

• [SLOW TEST:19.932 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
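The downward API exposes a container's memory limit through a `resourceFieldRef`, dividing the quantity by a `divisor` and rounding up to an integer; the same arithmetic applies whether the value lands in a projected volume file (this test) or in an env var. A sketch of that rule; the 64Mi limit and the divisors are illustrative values, since the actual quantities are not shown in the log:

```python
import math

# Sketch of downward-API divisor arithmetic: the resource quantity is
# divided by the divisor and rounded up to an integer. The 64Mi limit
# and 1Mi/1Ki divisors are illustrative, not taken from this log.
Ki, Mi = 1024, 1024 ** 2

def apply_divisor(quantity, divisor):
    return math.ceil(quantity / divisor)

assert apply_divisor(64 * Mi, Mi) == 64     # limits.memory, divisor "1Mi"
assert apply_divisor(64 * Mi, Ki) == 65536  # same limit, divisor "1Ki"
```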
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:18:34.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 18 13:18:35.086: INFO: Waiting up to 5m0s for pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004" in namespace "e2e-tests-downward-api-sf6c4" to be "success or failure"
Dec 18 13:18:35.239: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 152.233604ms
Dec 18 13:18:37.411: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324778344s
Dec 18 13:18:39.426: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339255752s
Dec 18 13:18:41.443: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356869924s
Dec 18 13:18:43.956: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.869328848s
Dec 18 13:18:45.972: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.885878032s
STEP: Saw pod success
Dec 18 13:18:45.973: INFO: Pod "downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:18:45.979: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004 container dapi-container: 
STEP: delete the pod
Dec 18 13:18:46.951: INFO: Waiting for pod downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004 to disappear
Dec 18 13:18:46.960: INFO: Pod downward-api-e51ab1c9-2198-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:18:46.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sf6c4" for this suite.
Dec 18 13:18:55.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:55.357: INFO: namespace: e2e-tests-downward-api-sf6c4, resource: bindings, ignored listing per whitelist
Dec 18 13:18:55.486: INFO: namespace e2e-tests-downward-api-sf6c4 deletion completed in 8.440024446s

• [SLOW TEST:20.628 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:18:55.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 18 13:19:09.216: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:19:37.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-t5rrm" for this suite.
Dec 18 13:19:43.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:19:43.937: INFO: namespace: e2e-tests-namespaces-t5rrm, resource: bindings, ignored listing per whitelist
Dec 18 13:19:43.997: INFO: namespace e2e-tests-namespaces-t5rrm deletion completed in 6.353266101s
STEP: Destroying namespace "e2e-tests-nsdeletetest-9z2m8" for this suite.
Dec 18 13:19:44.007: INFO: Namespace e2e-tests-nsdeletetest-9z2m8 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-5nmvc" for this suite.
Dec 18 13:19:50.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:19:50.169: INFO: namespace: e2e-tests-nsdeletetest-5nmvc, resource: bindings, ignored listing per whitelist
Dec 18 13:19:50.322: INFO: namespace e2e-tests-nsdeletetest-5nmvc deletion completed in 6.314352799s

• [SLOW TEST:54.835 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:19:50.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-123aa5d0-2199-11ea-ad77-0242ac110004
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-123aa5d0-2199-11ea-ad77-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:21:32.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zll4g" for this suite.
Dec 18 13:21:56.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:21:56.975: INFO: namespace: e2e-tests-projected-zll4g, resource: bindings, ignored listing per whitelist
Dec 18 13:21:57.003: INFO: namespace e2e-tests-projected-zll4g deletion completed in 24.270597525s

• [SLOW TEST:126.680 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:21:57.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 13:22:33.451: INFO: Container started at 2019-12-18 13:22:08 +0000 UTC, pod became ready at 2019-12-18 13:22:32 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:22:33.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9l424" for this suite.
Dec 18 13:22:55.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:22:55.689: INFO: namespace: e2e-tests-container-probe-9l424, resource: bindings, ignored listing per whitelist
Dec 18 13:22:55.760: INFO: namespace e2e-tests-container-probe-9l424 deletion completed in 22.30006008s

• [SLOW TEST:58.757 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:22:55.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 18 13:23:12.677: INFO: Successfully updated pod "labelsupdate80a4d3a9-2199-11ea-ad77-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:23:14.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lwdfg" for this suite.
Dec 18 13:23:40.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:23:41.039: INFO: namespace: e2e-tests-downward-api-lwdfg, resource: bindings, ignored listing per whitelist
Dec 18 13:23:41.058: INFO: namespace e2e-tests-downward-api-lwdfg deletion completed in 26.200109815s

• [SLOW TEST:45.297 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:23:41.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 18 13:23:41.458: INFO: Waiting up to 5m0s for pod "pod-9bb63094-2199-11ea-ad77-0242ac110004" in namespace "e2e-tests-emptydir-l87rf" to be "success or failure"
Dec 18 13:23:41.646: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 188.606541ms
Dec 18 13:23:43.669: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211260764s
Dec 18 13:23:45.682: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224594879s
Dec 18 13:23:48.175: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.717035182s
Dec 18 13:23:50.202: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743911013s
Dec 18 13:23:52.220: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.762465835s
Dec 18 13:23:54.273: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.815471258s
STEP: Saw pod success
Dec 18 13:23:54.274: INFO: Pod "pod-9bb63094-2199-11ea-ad77-0242ac110004" satisfied condition "success or failure"
Dec 18 13:23:54.284: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9bb63094-2199-11ea-ad77-0242ac110004 container test-container: 
STEP: delete the pod
Dec 18 13:23:54.595: INFO: Waiting for pod pod-9bb63094-2199-11ea-ad77-0242ac110004 to disappear
Dec 18 13:23:54.628: INFO: Pod pod-9bb63094-2199-11ea-ad77-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:23:54.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l87rf" for this suite.
Dec 18 13:24:03.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:24:03.153: INFO: namespace: e2e-tests-emptydir-l87rf, resource: bindings, ignored listing per whitelist
Dec 18 13:24:03.190: INFO: namespace e2e-tests-emptydir-l87rf deletion completed in 8.555942563s

• [SLOW TEST:22.131 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 18 13:24:03.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 18 13:24:03.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 18 13:24:15.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xwkbv" for this suite.
Dec 18 13:25:01.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:25:01.964: INFO: namespace: e2e-tests-pods-xwkbv, resource: bindings, ignored listing per whitelist
Dec 18 13:25:02.094: INFO: namespace e2e-tests-pods-xwkbv deletion completed in 46.431199351s

• [SLOW TEST:58.904 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
Dec 18 13:25:02.094: INFO: Running AfterSuite actions on all nodes
Dec 18 13:25:02.094: INFO: Running AfterSuite actions on node 1
Dec 18 13:25:02.094: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9455.026 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS